2026-03-03 00:00:12.352526 | Job console starting
2026-03-03 00:00:12.372248 | Updating git repos
2026-03-03 00:00:12.912347 | Cloning repos into workspace
2026-03-03 00:00:13.276705 | Restoring repo states
2026-03-03 00:00:13.321729 | Merging changes
2026-03-03 00:00:13.321751 | Checking out repos
2026-03-03 00:00:13.991356 | Preparing playbooks
2026-03-03 00:00:16.036662 | Running Ansible setup
2026-03-03 00:00:24.102246 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-03 00:00:25.721829 |
2026-03-03 00:00:25.721943 | PLAY [Base pre]
2026-03-03 00:00:25.751626 |
2026-03-03 00:00:25.751735 | TASK [Setup log path fact]
2026-03-03 00:00:25.779719 | orchestrator | ok
2026-03-03 00:00:25.805463 |
2026-03-03 00:00:25.805583 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-03 00:00:25.873351 | orchestrator | ok
2026-03-03 00:00:25.902108 |
2026-03-03 00:00:25.902209 | TASK [emit-job-header : Print job information]
2026-03-03 00:00:25.966473 | # Job Information
2026-03-03 00:00:25.966644 | Ansible Version: 2.16.14
2026-03-03 00:00:25.966675 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-03 00:00:25.966703 | Pipeline: periodic-midnight
2026-03-03 00:00:25.966760 | Executor: 521e9411259a
2026-03-03 00:00:25.966781 | Triggered by: https://github.com/osism/testbed
2026-03-03 00:00:25.966799 | Event ID: 4a84379285774ed9826c808e13cdc87a
2026-03-03 00:00:25.973024 |
2026-03-03 00:00:25.973117 | LOOP [emit-job-header : Print node information]
2026-03-03 00:00:26.280325 | orchestrator | ok:
2026-03-03 00:00:26.280535 | orchestrator | # Node Information
2026-03-03 00:00:26.280569 | orchestrator | Inventory Hostname: orchestrator
2026-03-03 00:00:26.280591 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-03 00:00:26.280609 | orchestrator | Username: zuul-testbed06
2026-03-03 00:00:26.280627 | orchestrator | Distro: Debian 12.13
2026-03-03 00:00:26.280646 | orchestrator | Provider: static-testbed
2026-03-03 00:00:26.280664 | orchestrator | Region:
2026-03-03 00:00:26.280681 | orchestrator | Label: testbed-orchestrator
2026-03-03 00:00:26.280698 | orchestrator | Product Name: OpenStack Nova
2026-03-03 00:00:26.280714 | orchestrator | Interface IP: 81.163.193.140
2026-03-03 00:00:26.299900 |
2026-03-03 00:00:26.300002 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-03 00:00:27.397618 | orchestrator -> localhost | changed
2026-03-03 00:00:27.404142 |
2026-03-03 00:00:27.404242 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-03 00:00:29.660556 | orchestrator -> localhost | changed
2026-03-03 00:00:29.674444 |
2026-03-03 00:00:29.674550 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-03 00:00:30.142521 | orchestrator -> localhost | ok
2026-03-03 00:00:30.148330 |
2026-03-03 00:00:30.148435 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-03 00:00:30.185698 | orchestrator | ok
2026-03-03 00:00:30.209336 | orchestrator | included: /var/lib/zuul/builds/8999bda436aa4417a08a6d306d807d2f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-03 00:00:30.223782 |
2026-03-03 00:00:30.223872 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-03 00:00:34.904317 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-03 00:00:34.904490 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8999bda436aa4417a08a6d306d807d2f/work/8999bda436aa4417a08a6d306d807d2f_id_rsa
2026-03-03 00:00:34.904521 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8999bda436aa4417a08a6d306d807d2f/work/8999bda436aa4417a08a6d306d807d2f_id_rsa.pub
2026-03-03 00:00:34.904541 | orchestrator -> localhost | The key fingerprint is:
2026-03-03 00:00:34.904562 | orchestrator -> localhost | SHA256:emwTzjXipv3emG/E3+wycYzwZIIXot5n07Q3/8yvepQ zuul-build-sshkey
2026-03-03 00:00:34.904579 | orchestrator -> localhost | The key's randomart image is:
2026-03-03 00:00:34.904608 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-03 00:00:34.904626 | orchestrator -> localhost | |                 |
2026-03-03 00:00:34.904643 | orchestrator -> localhost | | . .             |
2026-03-03 00:00:34.904660 | orchestrator -> localhost | | . o .           |
2026-03-03 00:00:34.904679 | orchestrator -> localhost | | . . + +         |
2026-03-03 00:00:34.904696 | orchestrator -> localhost | | .S.oo O =       |
2026-03-03 00:00:34.904714 | orchestrator -> localhost | | *.+..* E.+      |
2026-03-03 00:00:34.904733 | orchestrator -> localhost | | . X + + *o      |
2026-03-03 00:00:34.904751 | orchestrator -> localhost | | * . +. =o+      |
2026-03-03 00:00:34.904770 | orchestrator -> localhost | | . .o=o+o.=O     |
2026-03-03 00:00:34.904788 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-03 00:00:34.904832 | orchestrator -> localhost | ok: Runtime: 0:00:03.023281
2026-03-03 00:00:34.911220 |
2026-03-03 00:00:34.911317 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-03 00:00:34.938924 | orchestrator | ok
2026-03-03 00:00:34.960592 | orchestrator | included: /var/lib/zuul/builds/8999bda436aa4417a08a6d306d807d2f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-03 00:00:34.978421 |
2026-03-03 00:00:34.978519 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-03 00:00:35.033194 | orchestrator | skipping: Conditional result was False
2026-03-03 00:00:35.039740 |
2026-03-03 00:00:35.039828 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-03 00:00:35.865562 | orchestrator | changed
2026-03-03 00:00:35.870656 |
2026-03-03 00:00:35.870733 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-03 00:00:36.147918 | orchestrator | ok
2026-03-03 00:00:36.159077 |
2026-03-03 00:00:36.159189 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-03 00:00:36.664893 | orchestrator | ok
2026-03-03 00:00:36.669805 |
2026-03-03 00:00:36.669884 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-03 00:00:37.179096 | orchestrator | ok
2026-03-03 00:00:37.196594 |
2026-03-03 00:00:37.196687 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-03 00:00:37.243213 | orchestrator | skipping: Conditional result was False
2026-03-03 00:00:37.249056 |
2026-03-03 00:00:37.249144 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-03 00:00:38.592468 | orchestrator -> localhost | changed
2026-03-03 00:00:38.622112 |
2026-03-03 00:00:38.622214 | TASK [add-build-sshkey : Add back temp key]
2026-03-03 00:00:39.732980 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8999bda436aa4417a08a6d306d807d2f/work/8999bda436aa4417a08a6d306d807d2f_id_rsa (zuul-build-sshkey)
2026-03-03 00:00:39.733193 | orchestrator -> localhost | ok: Runtime: 0:00:00.024428
2026-03-03 00:00:39.739150 |
2026-03-03 00:00:39.739228 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-03 00:00:40.366131 | orchestrator | ok
2026-03-03 00:00:40.381846 |
2026-03-03 00:00:40.381941 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-03 00:00:40.464150 | orchestrator | skipping: Conditional result was False
2026-03-03 00:00:40.571321 |
2026-03-03 00:00:40.571415 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-03 00:00:41.181111 | orchestrator | ok
2026-03-03 00:00:41.227784 |
2026-03-03 00:00:41.231541 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-03 00:00:41.276660 | orchestrator | ok
2026-03-03 00:00:41.288169 |
2026-03-03 00:00:41.288268 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-03 00:00:41.882492 | orchestrator -> localhost | ok
2026-03-03 00:00:41.890541 |
2026-03-03 00:00:41.890624 | TASK [validate-host : Collect information about the host]
2026-03-03 00:00:43.298665 | orchestrator | ok
2026-03-03 00:00:43.348583 |
2026-03-03 00:00:43.348697 | TASK [validate-host : Sanitize hostname]
2026-03-03 00:00:43.499123 | orchestrator | ok
2026-03-03 00:00:43.503811 |
2026-03-03 00:00:43.503909 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-03 00:00:44.667807 | orchestrator -> localhost | changed
2026-03-03 00:00:44.673870 |
2026-03-03 00:00:44.673959 | TASK [validate-host : Collect information about zuul worker]
2026-03-03 00:00:45.301792 | orchestrator | ok
2026-03-03 00:00:45.306245 |
2026-03-03 00:00:45.306331 | TASK [validate-host : Write out all zuul information for each host]
2026-03-03 00:00:46.439839 | orchestrator -> localhost | changed
2026-03-03 00:00:46.452024 |
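The per-build key above is an ordinary 3072-bit RSA keypair with the comment `zuul-build-sshkey`, generated without a passphrase so Ansible can use it non-interactively. A minimal sketch of the equivalent steps (the workspace path and variable names here are illustrative, not the `add-build-sshkey` role's actual implementation, which embeds the build UUID in the filename):

```shell
#!/bin/sh
# Hypothetical reproduction of the key-creation step seen in the log:
# generate a per-build RSA keypair with no passphrase and the
# "zuul-build-sshkey" comment, then show the public half that would
# be appended to authorized_keys on every node.
set -e
WORK=$(mktemp -d)                # stand-in for the Zuul build workspace
KEY="$WORK/build_id_rsa"         # real path embeds the build UUID
ssh-keygen -q -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "$KEY"
cat "$KEY.pub"
```

After distributing the public key, the role removes the executor's master key from the agent and re-adds only the temporary build key, which is why the subsequent "Verify we can still SSH to all nodes" task matters.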
2026-03-03 00:00:46.452140 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-03 00:00:46.761708 | orchestrator | ok
2026-03-03 00:00:46.771534 |
2026-03-03 00:00:46.771636 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-03 00:02:14.056864 | orchestrator | changed:
2026-03-03 00:02:14.058467 | orchestrator | .d..t...... src/
2026-03-03 00:02:14.058532 | orchestrator | .d..t...... src/github.com/
2026-03-03 00:02:14.058565 | orchestrator | .d..t...... src/github.com/osism/
2026-03-03 00:02:14.058587 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-03 00:02:14.058608 | orchestrator | RedHat.yml
2026-03-03 00:02:14.076256 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-03 00:02:14.076274 | orchestrator | RedHat.yml
2026-03-03 00:02:14.076328 | orchestrator | = 2.2.0"...
2026-03-03 00:02:24.123685 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-03 00:02:24.142601 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-03 00:02:24.303370 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-03 00:02:25.059261 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-03 00:02:25.136046 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-03 00:02:25.699166 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-03 00:02:25.772499 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-03 00:02:26.198156 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-03 00:02:26.198229 | orchestrator |
2026-03-03 00:02:26.198236 | orchestrator | Providers are signed by their developers.
2026-03-03 00:02:26.198241 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-03 00:02:26.198253 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-03 00:02:26.198287 | orchestrator |
2026-03-03 00:02:26.198293 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-03 00:02:26.198297 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-03 00:02:26.198319 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-03 00:02:26.198330 | orchestrator | you run "tofu init" in the future.
2026-03-03 00:02:26.198788 | orchestrator |
2026-03-03 00:02:26.198839 | orchestrator | OpenTofu has been successfully initialized!
2026-03-03 00:02:26.198862 | orchestrator |
2026-03-03 00:02:26.198868 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-03 00:02:26.198873 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-03 00:02:26.198877 | orchestrator | should now work.
2026-03-03 00:02:26.198881 | orchestrator |
2026-03-03 00:02:26.198885 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-03 00:02:26.198889 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-03 00:02:26.198900 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-03 00:02:26.385631 | orchestrator | Created and switched to workspace "ci"!
2026-03-03 00:02:26.385779 | orchestrator |
2026-03-03 00:02:26.385789 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-03 00:02:26.385795 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-03 00:02:26.385799 | orchestrator | for this configuration.
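The provider resolution above is driven by version constraints in the testbed's OpenTofu configuration. A hedged sketch of what such a `required_providers` block could look like (only the `>= 1.53.0` constraint appears in the log; the source addresses follow the installed provider names, and everything else is illustrative):

```hcl
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.4.0 in this run
    }
    local = {
      source = "hashicorp/local" # resolved to v2.7.0
    }
    null = {
      source = "hashicorp/null" # resolved to v3.2.4
    }
  }
}
```

`tofu init` records the resolved versions in `.terraform.lock.hcl`, so committing that lock file (as the output recommends) pins future runs to the same provider versions.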
2026-03-03 00:02:26.564834 | orchestrator | ci.auto.tfvars
2026-03-03 00:02:26.582670 | orchestrator | default_custom.tf
2026-03-03 00:02:27.634573 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-03 00:02:28.204512 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-03 00:02:28.428745 | orchestrator |
2026-03-03 00:02:28.428855 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-03 00:02:28.428874 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-03 00:02:28.428887 | orchestrator |   + create
2026-03-03 00:02:28.428899 | orchestrator |  <= read (data resources)
2026-03-03 00:02:28.428911 | orchestrator |
2026-03-03 00:02:28.428922 | orchestrator | OpenTofu will perform the following actions:
2026-03-03 00:02:28.428956 | orchestrator |
2026-03-03 00:02:28.428968 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-03 00:02:28.428979 | orchestrator |   # (config refers to values not yet known)
2026-03-03 00:02:28.428989 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-03 00:02:28.429001 | orchestrator |       + checksum = (known after apply)
2026-03-03 00:02:28.429012 | orchestrator |       + created_at = (known after apply)
2026-03-03 00:02:28.429023 | orchestrator |       + file = (known after apply)
2026-03-03 00:02:28.429033 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.429088 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.429100 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-03 00:02:28.429112 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-03 00:02:28.429123 | orchestrator |       + most_recent = true
2026-03-03 00:02:28.429134 | orchestrator |       + name = (known after apply)
2026-03-03 00:02:28.429145 | orchestrator |       + protected = (known after apply)
2026-03-03 00:02:28.429156 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.429171 | orchestrator |       + schema = (known after apply)
2026-03-03 00:02:28.429182 | orchestrator |       + size_bytes = (known after apply)
2026-03-03 00:02:28.429193 | orchestrator |       + tags = (known after apply)
2026-03-03 00:02:28.429204 | orchestrator |       + updated_at = (known after apply)
2026-03-03 00:02:28.429215 | orchestrator |     }
2026-03-03 00:02:28.429226 | orchestrator |
2026-03-03 00:02:28.429237 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-03 00:02:28.429248 | orchestrator |   # (config refers to values not yet known)
2026-03-03 00:02:28.429259 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-03 00:02:28.429270 | orchestrator |       + checksum = (known after apply)
2026-03-03 00:02:28.429281 | orchestrator |       + created_at = (known after apply)
2026-03-03 00:02:28.429291 | orchestrator |       + file = (known after apply)
2026-03-03 00:02:28.429302 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.429312 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.429323 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-03 00:02:28.429334 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-03 00:02:28.429345 | orchestrator |       + most_recent = true
2026-03-03 00:02:28.429356 | orchestrator |       + name = (known after apply)
2026-03-03 00:02:28.429366 | orchestrator |       + protected = (known after apply)
2026-03-03 00:02:28.429377 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.429387 | orchestrator |       + schema = (known after apply)
2026-03-03 00:02:28.429398 | orchestrator |       + size_bytes = (known after apply)
2026-03-03 00:02:28.429409 | orchestrator |       + tags = (known after apply)
2026-03-03 00:02:28.429419 | orchestrator |       + updated_at = (known after apply)
2026-03-03 00:02:28.429430 | orchestrator |     }
2026-03-03 00:02:28.429441 | orchestrator |
2026-03-03 00:02:28.429452 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-03 00:02:28.429463 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-03 00:02:28.429474 | orchestrator |       + content = (known after apply)
2026-03-03 00:02:28.429485 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-03 00:02:28.429496 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-03 00:02:28.429507 | orchestrator |       + content_md5 = (known after apply)
2026-03-03 00:02:28.429517 | orchestrator |       + content_sha1 = (known after apply)
2026-03-03 00:02:28.429528 | orchestrator |       + content_sha256 = (known after apply)
2026-03-03 00:02:28.429539 | orchestrator |       + content_sha512 = (known after apply)
2026-03-03 00:02:28.429549 | orchestrator |       + directory_permission = "0777"
2026-03-03 00:02:28.429560 | orchestrator |       + file_permission = "0644"
2026-03-03 00:02:28.429571 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-03-03 00:02:28.429582 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.429593 | orchestrator |     }
2026-03-03 00:02:28.429610 | orchestrator |
2026-03-03 00:02:28.429621 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-03 00:02:28.429632 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-03 00:02:28.429644 | orchestrator |       + content = (known after apply)
2026-03-03 00:02:28.429654 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-03 00:02:28.429665 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-03 00:02:28.429676 | orchestrator |       + content_md5 = (known after apply)
2026-03-03 00:02:28.429686 | orchestrator |       + content_sha1 = (known after apply)
2026-03-03 00:02:28.429697 | orchestrator |       + content_sha256 = (known after apply)
2026-03-03 00:02:28.429730 | orchestrator |       + content_sha512 = (known after apply)
2026-03-03 00:02:28.429741 | orchestrator |       + directory_permission = "0777"
2026-03-03 00:02:28.429752 | orchestrator |       + file_permission = "0644"
2026-03-03 00:02:28.429770 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-03-03 00:02:28.429781 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.429792 | orchestrator |     }
2026-03-03 00:02:28.429803 | orchestrator |
2026-03-03 00:02:28.429824 | orchestrator |   # local_file.inventory will be created
2026-03-03 00:02:28.429835 | orchestrator |   + resource "local_file" "inventory" {
2026-03-03 00:02:28.429846 | orchestrator |       + content = (known after apply)
2026-03-03 00:02:28.429857 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-03 00:02:28.429868 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-03 00:02:28.429879 | orchestrator |       + content_md5 = (known after apply)
2026-03-03 00:02:28.429890 | orchestrator |       + content_sha1 = (known after apply)
2026-03-03 00:02:28.429901 | orchestrator |       + content_sha256 = (known after apply)
2026-03-03 00:02:28.429912 | orchestrator |       + content_sha512 = (known after apply)
2026-03-03 00:02:28.429924 | orchestrator |       + directory_permission = "0777"
2026-03-03 00:02:28.429934 | orchestrator |       + file_permission = "0644"
2026-03-03 00:02:28.429945 | orchestrator |       + filename = "inventory.ci"
2026-03-03 00:02:28.429956 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.429967 | orchestrator |     }
2026-03-03 00:02:28.429978 | orchestrator |
2026-03-03 00:02:28.429989 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-03 00:02:28.430000 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-03 00:02:28.430011 | orchestrator |       + content = (sensitive value)
2026-03-03 00:02:28.430061 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-03 00:02:28.430073 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-03 00:02:28.430084 | orchestrator |       + content_md5 = (known after apply)
2026-03-03 00:02:28.430094 | orchestrator |       + content_sha1 = (known after apply)
2026-03-03 00:02:28.430105 | orchestrator |       + content_sha256 = (known after apply)
2026-03-03 00:02:28.430116 | orchestrator |       + content_sha512 = (known after apply)
2026-03-03 00:02:28.430127 | orchestrator |       + directory_permission = "0700"
2026-03-03 00:02:28.430137 | orchestrator |       + file_permission = "0600"
2026-03-03 00:02:28.430148 | orchestrator |       + filename = ".id_rsa.ci"
2026-03-03 00:02:28.430159 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.430169 | orchestrator |     }
2026-03-03 00:02:28.430180 | orchestrator |
2026-03-03 00:02:28.430191 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-03 00:02:28.430202 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-03 00:02:28.430212 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.430223 | orchestrator |     }
2026-03-03 00:02:28.430233 | orchestrator |
2026-03-03 00:02:28.430245 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-03 00:02:28.430256 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-03 00:02:28.430267 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.430278 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.430288 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.430299 | orchestrator |       + image_id = (known after apply)
2026-03-03 00:02:28.430310 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.430321 | orchestrator |       + name = "testbed-volume-manager-base"
2026-03-03 00:02:28.430331 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.430342 | orchestrator |       + size = 80
2026-03-03 00:02:28.430353 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.430363 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.430374 | orchestrator |     }
2026-03-03 00:02:28.430384 | orchestrator |
2026-03-03 00:02:28.430395 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-03 00:02:28.430406 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-03 00:02:28.430417 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.430428 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.430439 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.430460 | orchestrator |       + image_id = (known after apply)
2026-03-03 00:02:28.430471 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.430482 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-03-03 00:02:28.430493 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.430504 | orchestrator |       + size = 80
2026-03-03 00:02:28.430514 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.430525 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.430536 | orchestrator |     }
2026-03-03 00:02:28.430546 | orchestrator |
2026-03-03 00:02:28.430557 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-03 00:02:28.430568 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-03 00:02:28.430579 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.430590 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.430600 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.430611 | orchestrator |       + image_id = (known after apply)
2026-03-03 00:02:28.430622 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.430632 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-03-03 00:02:28.430643 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.430654 | orchestrator |       + size = 80
2026-03-03 00:02:28.430664 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.430675 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.430686 | orchestrator |     }
2026-03-03 00:02:28.430697 | orchestrator |
2026-03-03 00:02:28.430741 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-03 00:02:28.430752 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-03 00:02:28.430763 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.430774 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.430793 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.430805 | orchestrator |       + image_id = (known after apply)
2026-03-03 00:02:28.430815 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.430826 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-03-03 00:02:28.430837 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.430847 | orchestrator |       + size = 80
2026-03-03 00:02:28.430858 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.430869 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.430880 | orchestrator |     }
2026-03-03 00:02:28.430891 | orchestrator |
2026-03-03 00:02:28.430901 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-03 00:02:28.430912 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-03 00:02:28.430923 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.430934 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.430944 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.430955 | orchestrator |       + image_id = (known after apply)
2026-03-03 00:02:28.430966 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.430982 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-03-03 00:02:28.430993 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.431004 | orchestrator |       + size = 80
2026-03-03 00:02:28.431015 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.431026 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.431036 | orchestrator |     }
2026-03-03 00:02:28.431047 | orchestrator |
2026-03-03 00:02:28.431058 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-03 00:02:28.431068 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-03 00:02:28.431079 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.431090 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.431100 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.431118 | orchestrator |       + image_id = (known after apply)
2026-03-03 00:02:28.431129 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.431140 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-03-03 00:02:28.431151 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.431161 | orchestrator |       + size = 80
2026-03-03 00:02:28.431172 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.431183 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.431195 | orchestrator |     }
2026-03-03 00:02:28.431214 | orchestrator |
2026-03-03 00:02:28.431232 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-03 00:02:28.431250 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-03 00:02:28.431266 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.431284 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.431302 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.431321 | orchestrator |       + image_id = (known after apply)
2026-03-03 00:02:28.431340 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.431359 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-03-03 00:02:28.431378 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.431396 | orchestrator |       + size = 80
2026-03-03 00:02:28.431415 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.431432 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.431448 | orchestrator |     }
2026-03-03 00:02:28.431460 | orchestrator |
2026-03-03 00:02:28.431470 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-03 00:02:28.431482 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-03 00:02:28.431493 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.431503 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.431514 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.431525 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.431535 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-03-03 00:02:28.431546 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.431556 | orchestrator |       + size = 20
2026-03-03 00:02:28.431567 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.431578 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.431589 | orchestrator |     }
2026-03-03 00:02:28.431599 | orchestrator |
2026-03-03 00:02:28.431610 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-03 00:02:28.431621 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-03 00:02:28.431632 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.431642 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.431653 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.431664 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.431674 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-03-03 00:02:28.431685 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.431695 | orchestrator |       + size = 20
2026-03-03 00:02:28.431763 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.431775 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.431786 | orchestrator |     }
2026-03-03 00:02:28.431796 | orchestrator |
2026-03-03 00:02:28.431806 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-03 00:02:28.431816 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-03 00:02:28.431826 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.431835 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.431845 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.431854 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.431864 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-03-03 00:02:28.431873 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.431892 | orchestrator |       + size = 20
2026-03-03 00:02:28.431902 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.431911 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.431921 | orchestrator |     }
2026-03-03 00:02:28.431930 | orchestrator |
2026-03-03 00:02:28.431939 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-03 00:02:28.431949 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-03 00:02:28.431959 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.431968 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.431985 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.431995 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.432004 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-03-03 00:02:28.432014 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.432023 | orchestrator |       + size = 20
2026-03-03 00:02:28.432033 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.432042 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.432052 | orchestrator |     }
2026-03-03 00:02:28.432062 | orchestrator |
2026-03-03 00:02:28.432071 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-03 00:02:28.432081 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-03 00:02:28.432090 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.432100 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.432109 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.432119 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.432128 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-03-03 00:02:28.432138 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.432153 | orchestrator |       + size = 20
2026-03-03 00:02:28.432163 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.432173 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.432183 | orchestrator |     }
2026-03-03 00:02:28.432195 | orchestrator |
2026-03-03 00:02:28.432210 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-03 00:02:28.432225 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-03 00:02:28.432241 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.432257 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.432273 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.432284 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.432293 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-03-03 00:02:28.432303 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.432312 | orchestrator |       + size = 20
2026-03-03 00:02:28.432322 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.432331 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.432340 | orchestrator |     }
2026-03-03 00:02:28.432350 | orchestrator |
2026-03-03 00:02:28.432360 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-03 00:02:28.432369 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-03 00:02:28.432379 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.432388 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.432398 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.432407 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.432416 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-03-03 00:02:28.432426 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.432435 | orchestrator |       + size = 20
2026-03-03 00:02:28.432445 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.432454 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.432464 | orchestrator |     }
2026-03-03 00:02:28.432473 | orchestrator |
2026-03-03 00:02:28.432483 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-03 00:02:28.432493 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-03 00:02:28.432509 | orchestrator |       + attachment = (known after apply)
2026-03-03 00:02:28.432519 | orchestrator |       + availability_zone = "nova"
2026-03-03 00:02:28.432529 | orchestrator |       + id = (known after apply)
2026-03-03 00:02:28.432538 | orchestrator |       + metadata = (known after apply)
2026-03-03 00:02:28.432548 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-03-03 00:02:28.432557 | orchestrator |       + region = (known after apply)
2026-03-03 00:02:28.432567 | orchestrator |       + size = 20
2026-03-03 00:02:28.432577 | orchestrator |       + volume_retype_policy = "never"
2026-03-03 00:02:28.432586 | orchestrator |       + volume_type = "ssd"
2026-03-03 00:02:28.432596 | orchestrator |     }
2026-03-03 00:02:28.432605 | orchestrator |
2026-03-03 00:02:28.432615 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-03 00:02:28.432624 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-03 00:02:28.432634 | orchestrator | + attachment = (known after apply) 2026-03-03 00:02:28.432643 | orchestrator | + availability_zone = "nova" 2026-03-03 00:02:28.432653 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.432662 | orchestrator | + metadata = (known after apply) 2026-03-03 00:02:28.432672 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-03 00:02:28.432681 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.432691 | orchestrator | + size = 20 2026-03-03 00:02:28.432700 | orchestrator | + volume_retype_policy = "never" 2026-03-03 00:02:28.432730 | orchestrator | + volume_type = "ssd" 2026-03-03 00:02:28.432740 | orchestrator | } 2026-03-03 00:02:28.432749 | orchestrator | 2026-03-03 00:02:28.432759 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-03 00:02:28.432768 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-03 00:02:28.432778 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-03 00:02:28.432787 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-03 00:02:28.432797 | orchestrator | + all_metadata = (known after apply) 2026-03-03 00:02:28.432807 | orchestrator | + all_tags = (known after apply) 2026-03-03 00:02:28.432816 | orchestrator | + availability_zone = "nova" 2026-03-03 00:02:28.432826 | orchestrator | + config_drive = true 2026-03-03 00:02:28.432835 | orchestrator | + created = (known after apply) 2026-03-03 00:02:28.432845 | orchestrator | + flavor_id = (known after apply) 2026-03-03 00:02:28.432854 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-03 00:02:28.432863 | orchestrator | + force_delete = false 2026-03-03 00:02:28.432873 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-03 00:02:28.432882 | 
orchestrator | + id = (known after apply) 2026-03-03 00:02:28.432892 | orchestrator | + image_id = (known after apply) 2026-03-03 00:02:28.432901 | orchestrator | + image_name = (known after apply) 2026-03-03 00:02:28.432911 | orchestrator | + key_pair = "testbed" 2026-03-03 00:02:28.432920 | orchestrator | + name = "testbed-manager" 2026-03-03 00:02:28.432929 | orchestrator | + power_state = "active" 2026-03-03 00:02:28.432939 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.432949 | orchestrator | + security_groups = (known after apply) 2026-03-03 00:02:28.432958 | orchestrator | + stop_before_destroy = false 2026-03-03 00:02:28.432973 | orchestrator | + updated = (known after apply) 2026-03-03 00:02:28.432983 | orchestrator | + user_data = (sensitive value) 2026-03-03 00:02:28.432993 | orchestrator | 2026-03-03 00:02:28.433003 | orchestrator | + block_device { 2026-03-03 00:02:28.433012 | orchestrator | + boot_index = 0 2026-03-03 00:02:28.433022 | orchestrator | + delete_on_termination = false 2026-03-03 00:02:28.433036 | orchestrator | + destination_type = "volume" 2026-03-03 00:02:28.433046 | orchestrator | + multiattach = false 2026-03-03 00:02:28.433056 | orchestrator | + source_type = "volume" 2026-03-03 00:02:28.433065 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.433085 | orchestrator | } 2026-03-03 00:02:28.433102 | orchestrator | 2026-03-03 00:02:28.433119 | orchestrator | + network { 2026-03-03 00:02:28.433130 | orchestrator | + access_network = false 2026-03-03 00:02:28.433139 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-03 00:02:28.433149 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-03 00:02:28.433158 | orchestrator | + mac = (known after apply) 2026-03-03 00:02:28.433167 | orchestrator | + name = (known after apply) 2026-03-03 00:02:28.433177 | orchestrator | + port = (known after apply) 2026-03-03 00:02:28.433186 | orchestrator | + uuid = (known after apply) 2026-03-03 
00:02:28.433196 | orchestrator | } 2026-03-03 00:02:28.433205 | orchestrator | } 2026-03-03 00:02:28.433215 | orchestrator | 2026-03-03 00:02:28.433224 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-03 00:02:28.433234 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-03 00:02:28.433243 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-03 00:02:28.433253 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-03 00:02:28.433262 | orchestrator | + all_metadata = (known after apply) 2026-03-03 00:02:28.433272 | orchestrator | + all_tags = (known after apply) 2026-03-03 00:02:28.433282 | orchestrator | + availability_zone = "nova" 2026-03-03 00:02:28.433369 | orchestrator | + config_drive = true 2026-03-03 00:02:28.433842 | orchestrator | + created = (known after apply) 2026-03-03 00:02:28.433894 | orchestrator | + flavor_id = (known after apply) 2026-03-03 00:02:28.433960 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-03 00:02:28.433968 | orchestrator | + force_delete = false 2026-03-03 00:02:28.433976 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-03 00:02:28.433984 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.433992 | orchestrator | + image_id = (known after apply) 2026-03-03 00:02:28.434000 | orchestrator | + image_name = (known after apply) 2026-03-03 00:02:28.434043 | orchestrator | + key_pair = "testbed" 2026-03-03 00:02:28.434054 | orchestrator | + name = "testbed-node-0" 2026-03-03 00:02:28.434062 | orchestrator | + power_state = "active" 2026-03-03 00:02:28.434169 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.434177 | orchestrator | + security_groups = (known after apply) 2026-03-03 00:02:28.434185 | orchestrator | + stop_before_destroy = false 2026-03-03 00:02:28.434193 | orchestrator | + updated = (known after apply) 2026-03-03 00:02:28.434201 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-03 00:02:28.434209 | orchestrator | 2026-03-03 00:02:28.434217 | orchestrator | + block_device { 2026-03-03 00:02:28.434387 | orchestrator | + boot_index = 0 2026-03-03 00:02:28.434412 | orchestrator | + delete_on_termination = false 2026-03-03 00:02:28.434421 | orchestrator | + destination_type = "volume" 2026-03-03 00:02:28.434429 | orchestrator | + multiattach = false 2026-03-03 00:02:28.434452 | orchestrator | + source_type = "volume" 2026-03-03 00:02:28.434460 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.434851 | orchestrator | } 2026-03-03 00:02:28.434863 | orchestrator | 2026-03-03 00:02:28.434884 | orchestrator | + network { 2026-03-03 00:02:28.434961 | orchestrator | + access_network = false 2026-03-03 00:02:28.434971 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-03 00:02:28.435016 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-03 00:02:28.435025 | orchestrator | + mac = (known after apply) 2026-03-03 00:02:28.435033 | orchestrator | + name = (known after apply) 2026-03-03 00:02:28.435058 | orchestrator | + port = (known after apply) 2026-03-03 00:02:28.435103 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.435112 | orchestrator | } 2026-03-03 00:02:28.435120 | orchestrator | } 2026-03-03 00:02:28.435128 | orchestrator | 2026-03-03 00:02:28.435265 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-03 00:02:28.435307 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-03 00:02:28.435316 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-03 00:02:28.435542 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-03 00:02:28.435553 | orchestrator | + all_metadata = (known after apply) 2026-03-03 00:02:28.435561 | orchestrator | + all_tags = (known after apply) 2026-03-03 00:02:28.435569 | orchestrator | + availability_zone = "nova" 2026-03-03 00:02:28.435683 
| orchestrator | + config_drive = true 2026-03-03 00:02:28.435723 | orchestrator | + created = (known after apply) 2026-03-03 00:02:28.435731 | orchestrator | + flavor_id = (known after apply) 2026-03-03 00:02:28.435739 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-03 00:02:28.435770 | orchestrator | + force_delete = false 2026-03-03 00:02:28.435869 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-03 00:02:28.435879 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.435887 | orchestrator | + image_id = (known after apply) 2026-03-03 00:02:28.435895 | orchestrator | + image_name = (known after apply) 2026-03-03 00:02:28.435902 | orchestrator | + key_pair = "testbed" 2026-03-03 00:02:28.435910 | orchestrator | + name = "testbed-node-1" 2026-03-03 00:02:28.435918 | orchestrator | + power_state = "active" 2026-03-03 00:02:28.435926 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.435933 | orchestrator | + security_groups = (known after apply) 2026-03-03 00:02:28.436042 | orchestrator | + stop_before_destroy = false 2026-03-03 00:02:28.436054 | orchestrator | + updated = (known after apply) 2026-03-03 00:02:28.436062 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-03 00:02:28.436171 | orchestrator | 2026-03-03 00:02:28.436182 | orchestrator | + block_device { 2026-03-03 00:02:28.436189 | orchestrator | + boot_index = 0 2026-03-03 00:02:28.436274 | orchestrator | + delete_on_termination = false 2026-03-03 00:02:28.436435 | orchestrator | + destination_type = "volume" 2026-03-03 00:02:28.436443 | orchestrator | + multiattach = false 2026-03-03 00:02:28.436471 | orchestrator | + source_type = "volume" 2026-03-03 00:02:28.436552 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.436844 | orchestrator | } 2026-03-03 00:02:28.436854 | orchestrator | 2026-03-03 00:02:28.436926 | orchestrator | + network { 2026-03-03 00:02:28.436984 | orchestrator | + access_network = 
false 2026-03-03 00:02:28.437126 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-03 00:02:28.437134 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-03 00:02:28.437142 | orchestrator | + mac = (known after apply) 2026-03-03 00:02:28.437165 | orchestrator | + name = (known after apply) 2026-03-03 00:02:28.437187 | orchestrator | + port = (known after apply) 2026-03-03 00:02:28.437226 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.437299 | orchestrator | } 2026-03-03 00:02:28.437339 | orchestrator | } 2026-03-03 00:02:28.437375 | orchestrator | 2026-03-03 00:02:28.437437 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-03 00:02:28.437606 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-03 00:02:28.437723 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-03 00:02:28.437732 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-03 00:02:28.437739 | orchestrator | + all_metadata = (known after apply) 2026-03-03 00:02:28.437860 | orchestrator | + all_tags = (known after apply) 2026-03-03 00:02:28.437938 | orchestrator | + availability_zone = "nova" 2026-03-03 00:02:28.437947 | orchestrator | + config_drive = true 2026-03-03 00:02:28.437954 | orchestrator | + created = (known after apply) 2026-03-03 00:02:28.437990 | orchestrator | + flavor_id = (known after apply) 2026-03-03 00:02:28.438106 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-03 00:02:28.438115 | orchestrator | + force_delete = false 2026-03-03 00:02:28.438151 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-03 00:02:28.438205 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.438212 | orchestrator | + image_id = (known after apply) 2026-03-03 00:02:28.438226 | orchestrator | + image_name = (known after apply) 2026-03-03 00:02:28.438259 | orchestrator | + key_pair = "testbed" 2026-03-03 00:02:28.438384 | orchestrator | + name = 
"testbed-node-2" 2026-03-03 00:02:28.438393 | orchestrator | + power_state = "active" 2026-03-03 00:02:28.438399 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.438444 | orchestrator | + security_groups = (known after apply) 2026-03-03 00:02:28.438580 | orchestrator | + stop_before_destroy = false 2026-03-03 00:02:28.438601 | orchestrator | + updated = (known after apply) 2026-03-03 00:02:28.438608 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-03 00:02:28.438615 | orchestrator | 2026-03-03 00:02:28.438732 | orchestrator | + block_device { 2026-03-03 00:02:28.438739 | orchestrator | + boot_index = 0 2026-03-03 00:02:28.438746 | orchestrator | + delete_on_termination = false 2026-03-03 00:02:28.438986 | orchestrator | + destination_type = "volume" 2026-03-03 00:02:28.439155 | orchestrator | + multiattach = false 2026-03-03 00:02:28.439337 | orchestrator | + source_type = "volume" 2026-03-03 00:02:28.439400 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.439420 | orchestrator | } 2026-03-03 00:02:28.439558 | orchestrator | 2026-03-03 00:02:28.439615 | orchestrator | + network { 2026-03-03 00:02:28.439760 | orchestrator | + access_network = false 2026-03-03 00:02:28.439816 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-03 00:02:28.439823 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-03 00:02:28.439830 | orchestrator | + mac = (known after apply) 2026-03-03 00:02:28.439836 | orchestrator | + name = (known after apply) 2026-03-03 00:02:28.439843 | orchestrator | + port = (known after apply) 2026-03-03 00:02:28.439850 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.439856 | orchestrator | } 2026-03-03 00:02:28.439863 | orchestrator | } 2026-03-03 00:02:28.439899 | orchestrator | 2026-03-03 00:02:28.439907 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-03 00:02:28.439914 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-03 00:02:28.440030 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-03 00:02:28.440037 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-03 00:02:28.440107 | orchestrator | + all_metadata = (known after apply) 2026-03-03 00:02:28.440180 | orchestrator | + all_tags = (known after apply) 2026-03-03 00:02:28.440188 | orchestrator | + availability_zone = "nova" 2026-03-03 00:02:28.440222 | orchestrator | + config_drive = true 2026-03-03 00:02:28.440325 | orchestrator | + created = (known after apply) 2026-03-03 00:02:28.440331 | orchestrator | + flavor_id = (known after apply) 2026-03-03 00:02:28.440337 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-03 00:02:28.440344 | orchestrator | + force_delete = false 2026-03-03 00:02:28.440524 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-03 00:02:28.440531 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.440699 | orchestrator | + image_id = (known after apply) 2026-03-03 00:02:28.440723 | orchestrator | + image_name = (known after apply) 2026-03-03 00:02:28.440729 | orchestrator | + key_pair = "testbed" 2026-03-03 00:02:28.440736 | orchestrator | + name = "testbed-node-3" 2026-03-03 00:02:28.440742 | orchestrator | + power_state = "active" 2026-03-03 00:02:28.440748 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.440754 | orchestrator | + security_groups = (known after apply) 2026-03-03 00:02:28.440860 | orchestrator | + stop_before_destroy = false 2026-03-03 00:02:28.440942 | orchestrator | + updated = (known after apply) 2026-03-03 00:02:28.441025 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-03 00:02:28.441033 | orchestrator | 2026-03-03 00:02:28.441068 | orchestrator | + block_device { 2026-03-03 00:02:28.441160 | orchestrator | + boot_index = 0 2026-03-03 00:02:28.441198 | orchestrator | + delete_on_termination = false 2026-03-03 
00:02:28.441204 | orchestrator | + destination_type = "volume" 2026-03-03 00:02:28.441216 | orchestrator | + multiattach = false 2026-03-03 00:02:28.441222 | orchestrator | + source_type = "volume" 2026-03-03 00:02:28.441229 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.441306 | orchestrator | } 2026-03-03 00:02:28.441312 | orchestrator | 2026-03-03 00:02:28.441319 | orchestrator | + network { 2026-03-03 00:02:28.441325 | orchestrator | + access_network = false 2026-03-03 00:02:28.441486 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-03 00:02:28.441493 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-03 00:02:28.441500 | orchestrator | + mac = (known after apply) 2026-03-03 00:02:28.441506 | orchestrator | + name = (known after apply) 2026-03-03 00:02:28.441512 | orchestrator | + port = (known after apply) 2026-03-03 00:02:28.441577 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.441669 | orchestrator | } 2026-03-03 00:02:28.441860 | orchestrator | } 2026-03-03 00:02:28.441867 | orchestrator | 2026-03-03 00:02:28.441874 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-03 00:02:28.441891 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-03 00:02:28.441898 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-03 00:02:28.441904 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-03 00:02:28.441911 | orchestrator | + all_metadata = (known after apply) 2026-03-03 00:02:28.441917 | orchestrator | + all_tags = (known after apply) 2026-03-03 00:02:28.441923 | orchestrator | + availability_zone = "nova" 2026-03-03 00:02:28.441929 | orchestrator | + config_drive = true 2026-03-03 00:02:28.442180 | orchestrator | + created = (known after apply) 2026-03-03 00:02:28.442278 | orchestrator | + flavor_id = (known after apply) 2026-03-03 00:02:28.442318 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-03 00:02:28.442324 | 
orchestrator | + force_delete = false 2026-03-03 00:02:28.442329 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-03 00:02:28.442334 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.442487 | orchestrator | + image_id = (known after apply) 2026-03-03 00:02:28.442495 | orchestrator | + image_name = (known after apply) 2026-03-03 00:02:28.442500 | orchestrator | + key_pair = "testbed" 2026-03-03 00:02:28.442506 | orchestrator | + name = "testbed-node-4" 2026-03-03 00:02:28.442605 | orchestrator | + power_state = "active" 2026-03-03 00:02:28.442611 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.442617 | orchestrator | + security_groups = (known after apply) 2026-03-03 00:02:28.442632 | orchestrator | + stop_before_destroy = false 2026-03-03 00:02:28.442637 | orchestrator | + updated = (known after apply) 2026-03-03 00:02:28.442755 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-03 00:02:28.442855 | orchestrator | 2026-03-03 00:02:28.442888 | orchestrator | + block_device { 2026-03-03 00:02:28.442908 | orchestrator | + boot_index = 0 2026-03-03 00:02:28.442924 | orchestrator | + delete_on_termination = false 2026-03-03 00:02:28.442939 | orchestrator | + destination_type = "volume" 2026-03-03 00:02:28.443047 | orchestrator | + multiattach = false 2026-03-03 00:02:28.443185 | orchestrator | + source_type = "volume" 2026-03-03 00:02:28.443193 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.443338 | orchestrator | } 2026-03-03 00:02:28.443344 | orchestrator | 2026-03-03 00:02:28.443350 | orchestrator | + network { 2026-03-03 00:02:28.443355 | orchestrator | + access_network = false 2026-03-03 00:02:28.443361 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-03 00:02:28.443366 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-03 00:02:28.443395 | orchestrator | + mac = (known after apply) 2026-03-03 00:02:28.443419 | orchestrator | + name = (known 
after apply) 2026-03-03 00:02:28.443548 | orchestrator | + port = (known after apply) 2026-03-03 00:02:28.443579 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.443584 | orchestrator | } 2026-03-03 00:02:28.443728 | orchestrator | } 2026-03-03 00:02:28.443763 | orchestrator | 2026-03-03 00:02:28.443882 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-03 00:02:28.443972 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-03 00:02:28.443979 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-03 00:02:28.444000 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-03 00:02:28.444005 | orchestrator | + all_metadata = (known after apply) 2026-03-03 00:02:28.444031 | orchestrator | + all_tags = (known after apply) 2026-03-03 00:02:28.444037 | orchestrator | + availability_zone = "nova" 2026-03-03 00:02:28.444042 | orchestrator | + config_drive = true 2026-03-03 00:02:28.444048 | orchestrator | + created = (known after apply) 2026-03-03 00:02:28.444065 | orchestrator | + flavor_id = (known after apply) 2026-03-03 00:02:28.444129 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-03 00:02:28.444137 | orchestrator | + force_delete = false 2026-03-03 00:02:28.444428 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-03 00:02:28.444591 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.444599 | orchestrator | + image_id = (known after apply) 2026-03-03 00:02:28.444604 | orchestrator | + image_name = (known after apply) 2026-03-03 00:02:28.444609 | orchestrator | + key_pair = "testbed" 2026-03-03 00:02:28.444639 | orchestrator | + name = "testbed-node-5" 2026-03-03 00:02:28.444645 | orchestrator | + power_state = "active" 2026-03-03 00:02:28.444650 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.444656 | orchestrator | + security_groups = (known after apply) 2026-03-03 00:02:28.444724 | orchestrator | + 
stop_before_destroy = false 2026-03-03 00:02:28.444755 | orchestrator | + updated = (known after apply) 2026-03-03 00:02:28.444862 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-03 00:02:28.444868 | orchestrator | 2026-03-03 00:02:28.444899 | orchestrator | + block_device { 2026-03-03 00:02:28.444923 | orchestrator | + boot_index = 0 2026-03-03 00:02:28.444948 | orchestrator | + delete_on_termination = false 2026-03-03 00:02:28.445075 | orchestrator | + destination_type = "volume" 2026-03-03 00:02:28.445246 | orchestrator | + multiattach = false 2026-03-03 00:02:28.445253 | orchestrator | + source_type = "volume" 2026-03-03 00:02:28.445258 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.445263 | orchestrator | } 2026-03-03 00:02:28.445268 | orchestrator | 2026-03-03 00:02:28.445378 | orchestrator | + network { 2026-03-03 00:02:28.445402 | orchestrator | + access_network = false 2026-03-03 00:02:28.445445 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-03 00:02:28.445504 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-03 00:02:28.445565 | orchestrator | + mac = (known after apply) 2026-03-03 00:02:28.445572 | orchestrator | + name = (known after apply) 2026-03-03 00:02:28.445690 | orchestrator | + port = (known after apply) 2026-03-03 00:02:28.445845 | orchestrator | + uuid = (known after apply) 2026-03-03 00:02:28.445876 | orchestrator | } 2026-03-03 00:02:28.445882 | orchestrator | } 2026-03-03 00:02:28.445887 | orchestrator | 2026-03-03 00:02:28.445893 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-03 00:02:28.445973 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-03 00:02:28.445980 | orchestrator | + fingerprint = (known after apply) 2026-03-03 00:02:28.446000 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.446006 | orchestrator | + name = "testbed" 2026-03-03 00:02:28.446165 | orchestrator | + private_key = 
(sensitive value) 2026-03-03 00:02:28.446250 | orchestrator | + public_key = (known after apply) 2026-03-03 00:02:28.446272 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.446291 | orchestrator | + user_id = (known after apply) 2026-03-03 00:02:28.446311 | orchestrator | } 2026-03-03 00:02:28.446332 | orchestrator | 2026-03-03 00:02:28.446354 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-03 00:02:28.446375 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-03 00:02:28.446435 | orchestrator | + device = (known after apply) 2026-03-03 00:02:28.446447 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.446458 | orchestrator | + instance_id = (known after apply) 2026-03-03 00:02:28.446469 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.446479 | orchestrator | + volume_id = (known after apply) 2026-03-03 00:02:28.446490 | orchestrator | } 2026-03-03 00:02:28.446501 | orchestrator | 2026-03-03 00:02:28.446512 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-03 00:02:28.446523 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-03 00:02:28.446534 | orchestrator | + device = (known after apply) 2026-03-03 00:02:28.446545 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.446555 | orchestrator | + instance_id = (known after apply) 2026-03-03 00:02:28.446566 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.446576 | orchestrator | + volume_id = (known after apply) 2026-03-03 00:02:28.446587 | orchestrator | } 2026-03-03 00:02:28.446597 | orchestrator | 2026-03-03 00:02:28.446608 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-03 00:02:28.446619 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-03 00:02:28.446630 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-03 00:02:28.457510 | orchestrator | + network_id = (known after apply) 2026-03-03 00:02:28.457518 | orchestrator | + no_gateway = false 2026-03-03 00:02:28.457525 | orchestrator | + region = (known after apply) 2026-03-03 00:02:28.457533 | orchestrator | + service_types = (known after apply) 2026-03-03 00:02:28.457545 | orchestrator | + tenant_id = (known after apply) 2026-03-03 00:02:28.457553 | orchestrator | 2026-03-03 00:02:28.457561 | orchestrator | + allocation_pool { 2026-03-03 00:02:28.457569 | orchestrator | + end = "192.168.31.250" 2026-03-03 00:02:28.457576 | orchestrator | + start = "192.168.31.200" 2026-03-03 00:02:28.457584 | orchestrator | } 2026-03-03 00:02:28.457592 | orchestrator | } 2026-03-03 00:02:28.457599 | orchestrator | 2026-03-03 00:02:28.457607 | orchestrator | # terraform_data.image will be created 2026-03-03 00:02:28.457615 | orchestrator | + resource "terraform_data" "image" { 2026-03-03 00:02:28.457622 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.457630 | orchestrator | + input = "Ubuntu 24.04" 2026-03-03 00:02:28.457638 | orchestrator | + output = (known after apply) 2026-03-03 00:02:28.457646 | orchestrator | } 2026-03-03 00:02:28.457653 | orchestrator | 2026-03-03 00:02:28.457661 | orchestrator | # terraform_data.image_node will be created 2026-03-03 00:02:28.457672 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-03 00:02:28.457680 | orchestrator | + id = (known after apply) 2026-03-03 00:02:28.457688 | orchestrator | + input = "Ubuntu 24.04" 2026-03-03 00:02:28.457696 | orchestrator | + output = (known after apply) 2026-03-03 00:02:28.457752 | orchestrator | } 2026-03-03 00:02:28.457767 | orchestrator | 2026-03-03 00:02:28.457776 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
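The security group, rule, and subnet resources in the plan above can be sketched as Terraform configuration. This is a hedged reconstruction from the plan output only: attribute values come from the plan, while the `network_id` reference and the overall file layout are assumptions.

```hcl
# Sketch reconstructed from the plan output; not the project's actual source.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# One of the three node rules (rule3: ICMP from anywhere).
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "icmp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that the VRRP rule in the plan uses the numeric protocol `"112"`, since Neutron accepts IP protocol numbers as well as names.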
2026-03-03 00:02:28.457783 | orchestrator |
2026-03-03 00:02:28.457790 | orchestrator | Changes to Outputs:
2026-03-03 00:02:28.457797 | orchestrator | + manager_address = (sensitive value)
2026-03-03 00:02:28.457803 | orchestrator | + private_key = (sensitive value)
2026-03-03 00:02:28.528503 | orchestrator | terraform_data.image: Creating...
2026-03-03 00:02:28.528847 | orchestrator | terraform_data.image: Creation complete after 0s [id=9c291027-d622-b380-6f8e-c0a70fa754d6]
2026-03-03 00:02:28.674586 | orchestrator | terraform_data.image_node: Creating...
2026-03-03 00:02:28.674681 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=b8276895-482c-a15b-ab01-584f74d690e1]
2026-03-03 00:02:28.684276 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-03 00:02:28.692800 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-03 00:02:28.692892 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-03 00:02:28.697575 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-03 00:02:28.698950 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-03 00:02:28.700146 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-03 00:02:28.701451 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-03 00:02:28.722068 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-03 00:02:28.722116 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-03 00:02:28.722122 | orchestrator | openstack_networking_network_v2.net_management: Creating...
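The `terraform_data.image` / `terraform_data.image_node` resources created above carry the image name "Ubuntu 24.04" into the image data lookups that follow. A minimal sketch of that pattern, under the assumption that the `output` value is used as the lookup name and that `most_recent` is set:

```hcl
# Sketch; the real configuration may wire this differently.
resource "terraform_data" "image_node" {
  input = "Ubuntu 24.04"
}

data "openstack_images_image_v2" "image_node" {
  name        = terraform_data.image_node.output # echoes the input value
  most_recent = true                             # assumption, not shown in the log
}
```

Changing the `input` replaces the `terraform_data` resource, which in turn re-evaluates anything derived from its `output`.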
2026-03-03 00:02:29.151292 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-03 00:02:29.158193 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-03 00:02:29.676886 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=0352981f-81a4-49fd-ab2e-0d8e91aed6de]
2026-03-03 00:02:30.272263 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-03 00:02:30.272760 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-03 00:02:30.272778 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-03 00:02:30.272783 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-03 00:02:30.272788 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-03 00:02:32.341427 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=307e1601-9544-4595-9bde-10bb8c02a301]
2026-03-03 00:02:32.344576 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=dcb1f927-210f-415f-93de-fe80b62d5dbc]
2026-03-03 00:02:32.350580 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-03 00:02:32.357607 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-03 00:02:32.367162 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=8acbf85b-6b93-492a-b370-4408c7f2c4d8]
2026-03-03 00:02:32.372976 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-03 00:02:32.394044 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=bb2822fc-3ed5-43a4-912e-7bd302443dc4]
2026-03-03 00:02:32.418255 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=2c5ded08-cf26-49fb-8fcb-b7f7b62b452d]
2026-03-03 00:02:32.418305 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=0c164c56-6d34-4cb4-9884-5e599fdbb702]
2026-03-03 00:02:32.662585 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-03 00:02:32.666224 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-03 00:02:32.677645 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-03 00:02:32.695604 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=f1b88ce7-718e-41a1-adfb-e8e019701473]
2026-03-03 00:02:32.698095 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=bba38cc5-8585-4a2f-8505-6987b8a4c361]
2026-03-03 00:02:32.704068 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-03 00:02:32.717540 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-03 00:02:32.722099 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=5c3cef04d2226f524f4afb679c6c8ff4bab2c726]
2026-03-03 00:02:32.731256 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-03 00:02:32.733587 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=bf883d86-e883-4c70-9a49-1cd6f6186c53]
2026-03-03 00:02:32.737898 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=636abc8785a010a7e0ecbcc50cc667dc1cc4746e]
2026-03-03 00:02:33.214277 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=b536afd9-feca-47a6-88b0-45d4e217eb34]
2026-03-03 00:02:33.901251 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=2b2f9f00-e9e9-4ed8-88a9-2065d486cd67]
2026-03-03 00:02:33.906589 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-03 00:02:35.775996 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=da145857-97b8-46c1-bd58-274a585c5d78]
2026-03-03 00:02:35.801467 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=0ab8a212-6a66-4584-abca-b2e7ece64247]
2026-03-03 00:02:36.116007 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=db05f358-997f-4d59-a241-e67298a52f64]
2026-03-03 00:02:36.121456 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=f28e64b4-5f1f-4b94-8837-d9b394718ec8]
2026-03-03 00:02:36.183554 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=0e510bd4-fcec-4a2e-a350-f93d70b29d0d]
2026-03-03 00:02:36.188752 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2]
2026-03-03 00:02:37.523618 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=9789fbbb-5b68-449d-a4e1-070df985eae8]
2026-03-03 00:02:37.529954 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-03 00:02:37.533145 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-03 00:02:37.533820 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-03 00:02:37.852236 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=b0cd69da-f3c4-4521-8184-f9ea85427423]
2026-03-03 00:02:37.859023 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-03 00:02:37.861475 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-03 00:02:37.861540 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-03 00:02:37.861583 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-03 00:02:37.862504 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-03 00:02:37.865797 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-03 00:02:38.200485 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=a481a765-efd4-4c4d-a019-63d70c15b1cc]
2026-03-03 00:02:38.254203 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=239259e3-c735-40fc-9984-1bcdc3e04a93]
2026-03-03 00:02:38.260336 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-03 00:02:38.260426 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-03 00:02:38.262118 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-03 00:02:38.266351 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-03 00:02:38.603476 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=4945eaf1-b57b-43d8-9eb9-ba2699ed64c0]
2026-03-03 00:02:38.613066 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-03 00:02:38.803908 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=1b88a272-8108-4864-b286-555aac3ac743]
2026-03-03 00:02:38.817037 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=456a4da5-ee27-487c-89e0-786a362fa1cc]
2026-03-03 00:02:38.819307 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-03 00:02:38.828880 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-03 00:02:39.028610 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=8e3ec946-d0ce-4b6c-aa81-1e7c6be70c5a]
2026-03-03 00:02:39.040745 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-03 00:02:39.196464 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=fd8b4bf8-6e8d-43db-8896-57f752ec67aa]
2026-03-03 00:02:39.201616 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-03 00:02:39.260973 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=3674f32f-312b-4f0f-a558-1deb73f263e3]
2026-03-03 00:02:39.271650 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-03 00:02:39.355701 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=47cbe04b-de01-49cd-9786-4ba7840458a1]
2026-03-03 00:02:39.645203 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=628d799a-9598-4f53-ae34-85d2cb208eab]
2026-03-03 00:02:39.695961 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=fee34270-06ec-48d8-b386-b2d70b89ffdd]
2026-03-03 00:02:39.872502 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=59fe376f-a11b-424e-9be3-79582818ab96]
2026-03-03 00:02:39.964362 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=e9d84028-844f-4149-ad47-37527bc4d938]
2026-03-03 00:02:40.001275 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=1e04a5f6-b3e2-4dcd-bc7a-a4a42e7235a7]
2026-03-03 00:02:40.002766 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=7a7b744d-6042-427b-8d29-017ee1e3dc66]
2026-03-03 00:02:40.113358 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=9737e3b5-c03d-48c3-93af-d3d6fc78248c]
2026-03-03 00:02:40.146989 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=a4fbd085-c677-46e4-b42b-6ce47994120f]
2026-03-03 00:02:40.881505 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=752d7152-cf35-43bc-87b8-012e09d09977]
2026-03-03 00:02:41.012417 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-03 00:02:41.012488 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-03 00:02:41.012501 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-03 00:02:41.012521 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-03 00:02:41.012531 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-03 00:02:41.012541 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-03 00:02:41.012551 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-03 00:02:42.759332 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=87aada0f-5281-4451-b75b-bdf7b0d177c8]
2026-03-03 00:02:42.771315 | orchestrator | local_file.inventory: Creating...
2026-03-03 00:02:42.773790 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-03 00:02:42.774973 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-03 00:02:42.779103 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=6a2c3f8860e45ece33892d09927bacbf01b63951]
2026-03-03 00:02:42.779334 | orchestrator | local_file.inventory: Creation complete after 0s [id=e84b33b9596f0a38a4ec6267ec0d2235791e69d3]
2026-03-03 00:02:43.877727 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=87aada0f-5281-4451-b75b-bdf7b0d177c8]
2026-03-03 00:02:50.915346 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-03 00:02:50.915583 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-03 00:02:50.918893 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-03 00:02:50.919016 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-03 00:02:50.928582 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-03 00:02:50.930990 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-03 00:03:00.915660 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-03 00:03:00.916790 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-03 00:03:00.918915 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-03 00:03:00.919125 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-03 00:03:00.929190 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-03 00:03:00.931415 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-03 00:03:10.916967 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-03 00:03:10.917066 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-03 00:03:10.919276 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-03 00:03:10.919384 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-03 00:03:10.929633 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-03 00:03:10.931899 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-03 00:03:20.922309 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-03 00:03:20.922407 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-03 00:03:20.922417 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-03 00:03:20.922424 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-03 00:03:20.930606 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-03 00:03:20.932855 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-03 00:03:21.444072 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 40s [id=154f9450-8cdb-483c-a21b-a9ae2a8e3536]
2026-03-03 00:03:21.720230 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=ea2cf404-2dd2-426d-b9c5-5fbfa56e4b4a]
2026-03-03 00:03:22.196984 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=c498dc9f-d5e6-4001-8a2c-e4daeeb31250]
2026-03-03 00:03:22.302865 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=8530cd4a-62d8-46e6-801c-5d11b931e0ab]
2026-03-03 00:03:22.483094 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=62509bff-c6d7-4de0-89f1-fe8298732b66]
2026-03-03 00:03:30.930327 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-03-03 00:03:33.151298 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 52s [id=37bafaf2-2418-4b8f-8e70-ccaa4518b973]
2026-03-03 00:03:33.170884 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-03 00:03:33.179353 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3041825726096564114]
2026-03-03 00:03:33.184007 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
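The `null_resource.node_semaphore` created above acts as a gate: volume attachments start only after every node server exists. A minimal sketch of that pattern; the `count` and the instance/volume index mapping are hypothetical, chosen only to illustrate the dependency chain:

```hcl
# Sketch of the semaphore pattern visible in the apply log.
resource "null_resource" "node_semaphore" {
  # Waits for all node servers, regardless of which volume goes where.
  depends_on = [openstack_compute_instance_v2.node_server]
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9 # hypothetical: 9 extra volumes spread across the nodes

  # Hypothetical mapping; the real distribution is not visible in the log.
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id

  depends_on = [null_resource.node_semaphore]
}
```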
2026-03-03 00:03:33.184095 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-03 00:03:33.184115 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-03 00:03:33.184922 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-03 00:03:33.193422 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-03 00:03:33.197238 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-03 00:03:33.203920 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-03 00:03:33.222117 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-03 00:03:33.224502 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-03 00:03:33.231606 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-03 00:03:36.991843 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=c498dc9f-d5e6-4001-8a2c-e4daeeb31250/8acbf85b-6b93-492a-b370-4408c7f2c4d8]
2026-03-03 00:03:37.019666 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=154f9450-8cdb-483c-a21b-a9ae2a8e3536/bb2822fc-3ed5-43a4-912e-7bd302443dc4]
2026-03-03 00:03:37.026813 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=ea2cf404-2dd2-426d-b9c5-5fbfa56e4b4a/bf883d86-e883-4c70-9a49-1cd6f6186c53]
2026-03-03 00:03:37.048646 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=c498dc9f-d5e6-4001-8a2c-e4daeeb31250/f1b88ce7-718e-41a1-adfb-e8e019701473]
2026-03-03 00:03:37.083052 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=ea2cf404-2dd2-426d-b9c5-5fbfa56e4b4a/307e1601-9544-4595-9bde-10bb8c02a301]
2026-03-03 00:03:37.138421 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=154f9450-8cdb-483c-a21b-a9ae2a8e3536/2c5ded08-cf26-49fb-8fcb-b7f7b62b452d]
2026-03-03 00:03:43.156738 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=154f9450-8cdb-483c-a21b-a9ae2a8e3536/dcb1f927-210f-415f-93de-fe80b62d5dbc]
2026-03-03 00:03:43.192320 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=c498dc9f-d5e6-4001-8a2c-e4daeeb31250/0c164c56-6d34-4cb4-9884-5e599fdbb702]
2026-03-03 00:03:43.207940 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Still creating... [10s elapsed]
2026-03-03 00:03:43.233057 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-03 00:03:43.234943 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=ea2cf404-2dd2-426d-b9c5-5fbfa56e4b4a/bba38cc5-8585-4a2f-8505-6987b8a4c361]
2026-03-03 00:03:53.242602 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-03 00:03:53.719804 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=058a3b70-046e-4b45-b851-7583aacb7595]
2026-03-03 00:03:53.737281 | orchestrator |
2026-03-03 00:03:53.737383 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-03 00:03:53.737395 | orchestrator |
2026-03-03 00:03:53.737404 | orchestrator | Outputs:
2026-03-03 00:03:53.737411 | orchestrator |
2026-03-03 00:03:53.737419 | orchestrator | manager_address =
2026-03-03 00:03:53.737427 | orchestrator | private_key =
2026-03-03 00:03:53.843487 | orchestrator | ok: Runtime: 0:01:29.821341
2026-03-03 00:03:53.873863 |
2026-03-03 00:03:53.874011 | TASK [Create infrastructure (stable)]
2026-03-03 00:03:54.408143 | orchestrator | skipping: Conditional result was False
2026-03-03 00:03:54.428033 |
2026-03-03 00:03:54.428210 | TASK [Fetch manager address]
2026-03-03 00:03:54.921485 | orchestrator | ok
2026-03-03 00:03:54.931090 |
2026-03-03 00:03:54.931244 | TASK [Set manager_host address]
2026-03-03 00:03:55.008076 | orchestrator | ok
2026-03-03 00:03:55.015702 |
2026-03-03 00:03:55.015822 | LOOP [Update ansible collections]
2026-03-03 00:03:56.181186 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-03 00:03:56.181577 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-03 00:03:56.181638 | orchestrator | Starting galaxy collection install process
2026-03-03 00:03:56.181665 | orchestrator | Process install dependency map
2026-03-03 00:03:56.181688 | orchestrator | Starting collection install process
2026-03-03 00:03:56.181710 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-03-03 00:03:56.181737 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-03-03 00:03:56.181768 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-03 00:03:56.181821 | orchestrator | ok: Item: commons Runtime: 0:00:00.788562
2026-03-03 00:03:57.258609 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-03 00:03:57.258778 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-03 00:03:57.258829 | orchestrator | Starting galaxy collection install process
2026-03-03 00:03:57.258913 | orchestrator | Process install dependency map
2026-03-03 00:03:57.258948 | orchestrator | Starting collection install process
2026-03-03 00:03:57.258981 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-03-03 00:03:57.259013 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-03-03 00:03:57.259046 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-03 00:03:57.259096 | orchestrator | ok: Item: services Runtime: 0:00:00.772186
2026-03-03 00:03:57.276605 |
2026-03-03 00:03:57.276762 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-03 00:04:07.850780 | orchestrator | ok
2026-03-03 00:04:07.860930 |
2026-03-03 00:04:07.861045 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-03 00:05:07.896483 | orchestrator | ok
2026-03-03 00:05:07.913244 |
2026-03-03 00:05:07.913455 | TASK [Fetch manager ssh hostkey]
2026-03-03 00:05:09.496288 | orchestrator | Output suppressed because no_log was given
2026-03-03 00:05:09.510102 |
2026-03-03 00:05:09.510246 | TASK [Get ssh keypair from terraform environment]
2026-03-03 00:05:10.044063 | orchestrator | ok: Runtime: 0:00:00.005705
2026-03-03 00:05:10.060976 |
2026-03-03 00:05:10.061126 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-03 00:05:10.109402 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-03 00:05:10.121651 |
2026-03-03 00:05:10.121808 | TASK [Run manager part 0]
2026-03-03 00:05:11.217801 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-03 00:05:11.271408 | orchestrator |
2026-03-03 00:05:11.271457 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-03 00:05:11.271466 | orchestrator |
2026-03-03 00:05:11.271483 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-03 00:05:12.894983 | orchestrator | ok: [testbed-manager]
2026-03-03 00:05:12.895061 | orchestrator |
2026-03-03 00:05:12.895095 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-03 00:05:12.895109 | orchestrator |
2026-03-03 00:05:12.895122 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-03 00:05:14.791816 | orchestrator | ok: [testbed-manager]
2026-03-03 00:05:14.791858 | orchestrator |
2026-03-03 00:05:14.791869 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-03 00:05:15.404980 | orchestrator | ok: [testbed-manager]
2026-03-03 00:05:15.405023 | orchestrator |
2026-03-03 00:05:15.405033 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-03 00:05:15.449826 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:05:15.449877 | orchestrator |
2026-03-03 00:05:15.449894 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-03 00:05:15.482997 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:05:15.483042 | orchestrator |
2026-03-03 00:05:15.483052 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-03 00:05:15.515147 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:05:15.515181 | orchestrator |
2026-03-03 00:05:15.515187 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-03 00:05:15.550058 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:05:15.550103 | orchestrator |
2026-03-03 00:05:15.550113 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-03 00:05:15.578576 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:05:15.578618 | orchestrator |
2026-03-03 00:05:15.578626 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-03 00:05:15.605841 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:05:15.605884 | orchestrator |
2026-03-03 00:05:15.605895 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-03 00:05:15.634134 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:05:15.634163 | orchestrator |
2026-03-03 00:05:15.634169 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-03 00:05:16.348499 | orchestrator | changed: [testbed-manager]
2026-03-03 00:05:16.348556 | orchestrator |
2026-03-03 00:05:16.348567 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-03 00:08:07.893065 | orchestrator | changed: [testbed-manager]
2026-03-03 00:08:07.893215 | orchestrator |
2026-03-03 00:08:07.893237 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-03 00:10:01.452868 | orchestrator | changed: [testbed-manager]
2026-03-03 00:10:01.452918 | orchestrator |
2026-03-03 00:10:01.452926 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-03 00:10:27.918112 | orchestrator | changed: [testbed-manager]
2026-03-03 00:10:27.918221 | orchestrator |
2026-03-03 00:10:27.918243 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-03 00:10:36.711331 | orchestrator | changed: [testbed-manager]
2026-03-03 00:10:36.711444 | orchestrator |
2026-03-03 00:10:36.711470 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-03 00:10:36.756618 | orchestrator | ok: [testbed-manager]
2026-03-03 00:10:36.756751 | orchestrator |
2026-03-03 00:10:36.756775 | orchestrator | TASK [Get current user] ********************************************************
2026-03-03 00:10:37.558226 | orchestrator | ok: [testbed-manager]
2026-03-03 00:10:37.558319 | orchestrator |
2026-03-03 00:10:37.558339 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-03 00:10:38.303495 | orchestrator | changed: [testbed-manager]
2026-03-03 00:10:38.303534 | orchestrator |
2026-03-03 00:10:38.303542 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-03 00:10:45.806151 | orchestrator | changed: [testbed-manager]
2026-03-03 00:10:45.806266 | orchestrator |
2026-03-03 00:10:45.806324 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-03 00:10:52.351122 | orchestrator | changed: [testbed-manager]
2026-03-03 00:10:52.351239 |
orchestrator | 2026-03-03 00:10:52.351270 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-03 00:10:54.962763 | orchestrator | changed: [testbed-manager] 2026-03-03 00:10:54.962847 | orchestrator | 2026-03-03 00:10:54.962862 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-03 00:10:56.536175 | orchestrator | changed: [testbed-manager] 2026-03-03 00:10:56.536218 | orchestrator | 2026-03-03 00:10:56.536225 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-03 00:10:57.588490 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-03 00:10:57.589226 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-03 00:10:57.589270 | orchestrator | 2026-03-03 00:10:57.589289 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-03 00:10:57.633970 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-03 00:10:57.634110 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-03 00:10:57.634127 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-03 00:10:57.634141 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-03 00:11:00.771764 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-03 00:11:00.771853 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-03 00:11:00.771868 | orchestrator | 2026-03-03 00:11:00.771881 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-03 00:11:01.336154 | orchestrator | changed: [testbed-manager] 2026-03-03 00:11:01.336237 | orchestrator | 2026-03-03 00:11:01.336254 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-03 00:19:28.804011 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-03 00:19:28.804097 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-03 00:19:28.804116 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-03 00:19:28.804131 | orchestrator | 2026-03-03 00:19:28.804146 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-03 00:19:30.973948 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-03 00:19:30.974069 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-03 00:19:30.974087 | orchestrator | 2026-03-03 00:19:30.974100 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-03 00:19:30.974112 | orchestrator | 2026-03-03 00:19:30.974124 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-03 00:19:32.353725 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:32.353819 | orchestrator | 2026-03-03 00:19:32.353837 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-03 00:19:32.405254 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:32.405353 | 
orchestrator | 2026-03-03 00:19:32.405364 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-03 00:19:32.472263 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:32.472379 | orchestrator | 2026-03-03 00:19:32.472395 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-03 00:19:33.255823 | orchestrator | changed: [testbed-manager] 2026-03-03 00:19:33.255910 | orchestrator | 2026-03-03 00:19:33.255926 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-03 00:19:34.017936 | orchestrator | changed: [testbed-manager] 2026-03-03 00:19:34.018059 | orchestrator | 2026-03-03 00:19:34.018078 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-03 00:19:35.402171 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-03 00:19:35.402262 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-03 00:19:35.402306 | orchestrator | 2026-03-03 00:19:35.402339 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-03 00:19:36.770057 | orchestrator | changed: [testbed-manager] 2026-03-03 00:19:36.770164 | orchestrator | 2026-03-03 00:19:36.770178 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-03 00:19:38.457438 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-03 00:19:38.457507 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-03 00:19:38.457521 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-03 00:19:38.457539 | orchestrator | 2026-03-03 00:19:38.457560 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-03 00:19:38.514095 | orchestrator | skipping: 
[testbed-manager] 2026-03-03 00:19:38.514194 | orchestrator | 2026-03-03 00:19:38.514210 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-03 00:19:38.595407 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:19:38.595466 | orchestrator | 2026-03-03 00:19:38.595481 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-03 00:19:39.150742 | orchestrator | changed: [testbed-manager] 2026-03-03 00:19:39.150851 | orchestrator | 2026-03-03 00:19:39.150869 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-03 00:19:39.216966 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:19:39.217057 | orchestrator | 2026-03-03 00:19:39.217074 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-03 00:19:40.078589 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-03 00:19:40.078681 | orchestrator | changed: [testbed-manager] 2026-03-03 00:19:40.078697 | orchestrator | 2026-03-03 00:19:40.078709 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-03 00:19:40.119569 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:19:40.119656 | orchestrator | 2026-03-03 00:19:40.119671 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-03 00:19:40.155608 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:19:40.155688 | orchestrator | 2026-03-03 00:19:40.155702 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-03 00:19:40.191095 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:19:40.191197 | orchestrator | 2026-03-03 00:19:40.191221 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-03 00:19:40.256442 | 
orchestrator | skipping: [testbed-manager] 2026-03-03 00:19:40.256535 | orchestrator | 2026-03-03 00:19:40.256557 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-03 00:19:40.949921 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:40.950013 | orchestrator | 2026-03-03 00:19:40.950124 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-03 00:19:40.950147 | orchestrator | 2026-03-03 00:19:40.950168 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-03 00:19:42.251612 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:42.251646 | orchestrator | 2026-03-03 00:19:42.251652 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-03 00:19:43.255770 | orchestrator | changed: [testbed-manager] 2026-03-03 00:19:43.255871 | orchestrator | 2026-03-03 00:19:43.255887 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:19:43.255899 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-03 00:19:43.255909 | orchestrator | 2026-03-03 00:19:43.713629 | orchestrator | ok: Runtime: 0:14:32.948973 2026-03-03 00:19:43.741723 | 2026-03-03 00:19:43.741924 | TASK [Point out that logging in on the manager is now possible] 2026-03-03 00:19:43.788893 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-03 00:19:43.800367 | 2026-03-03 00:19:43.800514 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-03 00:19:43.837179 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-03-03 00:19:43.848894 | 2026-03-03 00:19:43.849058 | TASK [Run manager part 1 + 2] 2026-03-03 00:19:44.709081 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-03 00:19:44.764410 | orchestrator | 2026-03-03 00:19:44.764457 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-03 00:19:44.764464 | orchestrator | 2026-03-03 00:19:44.764477 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-03 00:19:47.483867 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:47.483921 | orchestrator | 2026-03-03 00:19:47.483961 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-03 00:19:47.530062 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:19:47.530116 | orchestrator | 2026-03-03 00:19:47.530127 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-03 00:19:47.576892 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:47.576944 | orchestrator | 2026-03-03 00:19:47.576953 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-03 00:19:47.619496 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:47.619552 | orchestrator | 2026-03-03 00:19:47.619563 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-03 00:19:47.687457 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:47.687533 | orchestrator | 2026-03-03 00:19:47.687550 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-03 00:19:47.746872 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:47.746929 | orchestrator | 2026-03-03 00:19:47.746940 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-03 00:19:47.797321 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-03 00:19:47.797366 | orchestrator | 2026-03-03 00:19:47.797371 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-03 00:19:48.528717 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:48.528775 | orchestrator | 2026-03-03 00:19:48.528784 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-03 00:19:48.578717 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:19:48.578782 | orchestrator | 2026-03-03 00:19:48.578795 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-03 00:19:50.121946 | orchestrator | changed: [testbed-manager] 2026-03-03 00:19:50.122301 | orchestrator | 2026-03-03 00:19:50.122342 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-03 00:19:50.628525 | orchestrator | ok: [testbed-manager] 2026-03-03 00:19:50.628574 | orchestrator | 2026-03-03 00:19:50.628582 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-03 00:19:51.663262 | orchestrator | changed: [testbed-manager] 2026-03-03 00:19:51.663358 | orchestrator | 2026-03-03 00:19:51.663374 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-03 00:20:07.235094 | orchestrator | changed: [testbed-manager] 2026-03-03 00:20:07.235192 | orchestrator | 2026-03-03 00:20:07.235209 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-03 00:20:07.909748 | orchestrator | ok: [testbed-manager] 2026-03-03 00:20:07.909792 | orchestrator | 2026-03-03 00:20:07.909802 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-03 00:20:07.962448 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:20:07.962490 | orchestrator | 2026-03-03 00:20:07.962498 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-03 00:20:08.982473 | orchestrator | changed: [testbed-manager] 2026-03-03 00:20:08.982516 | orchestrator | 2026-03-03 00:20:08.982525 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-03 00:20:09.918074 | orchestrator | changed: [testbed-manager] 2026-03-03 00:20:09.918202 | orchestrator | 2026-03-03 00:20:09.918219 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-03 00:20:10.491708 | orchestrator | changed: [testbed-manager] 2026-03-03 00:20:10.491798 | orchestrator | 2026-03-03 00:20:10.491813 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-03 00:20:10.535603 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-03 00:20:10.535730 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-03 00:20:10.535757 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-03 00:20:10.535770 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-03 00:20:12.438420 | orchestrator | changed: [testbed-manager] 2026-03-03 00:20:12.438497 | orchestrator | 2026-03-03 00:20:12.438507 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-03 00:20:21.140808 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-03 00:20:21.140907 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-03 00:20:21.140925 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-03 00:20:21.140938 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-03 00:20:21.140957 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-03 00:20:21.140968 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-03 00:20:21.140979 | orchestrator | 2026-03-03 00:20:21.140991 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-03 00:20:22.180647 | orchestrator | changed: [testbed-manager] 2026-03-03 00:20:22.180694 | orchestrator | 2026-03-03 00:20:22.180701 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-03 00:20:22.226693 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:20:22.226737 | orchestrator | 2026-03-03 00:20:22.226746 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-03 00:20:25.136155 | orchestrator | changed: [testbed-manager] 2026-03-03 00:20:25.136198 | orchestrator | 2026-03-03 00:20:25.136207 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-03 00:20:25.183134 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:20:25.183177 | orchestrator | 2026-03-03 00:20:25.183185 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-03 00:21:52.895481 | orchestrator | changed: [testbed-manager] 2026-03-03 
00:21:52.895582 | orchestrator | 2026-03-03 00:21:52.895600 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-03 00:21:53.873999 | orchestrator | ok: [testbed-manager] 2026-03-03 00:21:53.874085 | orchestrator | 2026-03-03 00:21:53.874098 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:21:53.874108 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-03 00:21:53.874117 | orchestrator | 2026-03-03 00:21:54.476117 | orchestrator | ok: Runtime: 0:02:09.796879 2026-03-03 00:21:54.492468 | 2026-03-03 00:21:54.492622 | TASK [Reboot manager] 2026-03-03 00:21:56.030441 | orchestrator | ok: Runtime: 0:00:00.888419 2026-03-03 00:21:56.047992 | 2026-03-03 00:21:56.048153 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-03 00:22:09.832835 | orchestrator | ok 2026-03-03 00:22:09.844004 | 2026-03-03 00:22:09.844141 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-03 00:23:09.902315 | orchestrator | ok 2026-03-03 00:23:09.912806 | 2026-03-03 00:23:09.912941 | TASK [Deploy manager + bootstrap nodes] 2026-03-03 00:23:12.413603 | orchestrator | 2026-03-03 00:23:12.413823 | orchestrator | # DEPLOY MANAGER 2026-03-03 00:23:12.413849 | orchestrator | 2026-03-03 00:23:12.413863 | orchestrator | + set -e 2026-03-03 00:23:12.413876 | orchestrator | + echo 2026-03-03 00:23:12.413890 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-03 00:23:12.413926 | orchestrator | + echo 2026-03-03 00:23:12.413981 | orchestrator | + cat /opt/manager-vars.sh 2026-03-03 00:23:12.417084 | orchestrator | export NUMBER_OF_NODES=6 2026-03-03 00:23:12.417132 | orchestrator | 2026-03-03 00:23:12.417145 | orchestrator | export CEPH_VERSION=reef 2026-03-03 00:23:12.417159 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-03 00:23:12.417172 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-03 00:23:12.417195 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-03 00:23:12.417206 | orchestrator | 2026-03-03 00:23:12.417225 | orchestrator | export ARA=false 2026-03-03 00:23:12.417236 | orchestrator | export DEPLOY_MODE=manager 2026-03-03 00:23:12.417254 | orchestrator | export TEMPEST=true 2026-03-03 00:23:12.417265 | orchestrator | export IS_ZUUL=true 2026-03-03 00:23:12.417276 | orchestrator | 2026-03-03 00:23:12.417337 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.90 2026-03-03 00:23:12.417350 | orchestrator | export EXTERNAL_API=false 2026-03-03 00:23:12.417361 | orchestrator | 2026-03-03 00:23:12.417372 | orchestrator | export IMAGE_USER=ubuntu 2026-03-03 00:23:12.417386 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-03 00:23:12.417397 | orchestrator | 2026-03-03 00:23:12.417408 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-03 00:23:12.417426 | orchestrator | 2026-03-03 00:23:12.417438 | orchestrator | + echo 2026-03-03 00:23:12.417450 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-03 00:23:12.418482 | orchestrator | ++ export INTERACTIVE=false 2026-03-03 00:23:12.418512 | orchestrator | ++ INTERACTIVE=false 2026-03-03 00:23:12.418526 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-03 00:23:12.418540 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-03 00:23:12.418722 | orchestrator | + source /opt/manager-vars.sh 2026-03-03 00:23:12.418739 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-03 00:23:12.418752 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-03 00:23:12.418764 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-03 00:23:12.418777 | orchestrator | ++ CEPH_VERSION=reef 2026-03-03 00:23:12.418841 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-03 00:23:12.418855 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-03 00:23:12.418867 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-03 00:23:12.418877 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-03 00:23:12.418889 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-03 00:23:12.418909 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-03 00:23:12.418921 | orchestrator | ++ export ARA=false 2026-03-03 00:23:12.418932 | orchestrator | ++ ARA=false 2026-03-03 00:23:12.418943 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-03 00:23:12.418954 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-03 00:23:12.418972 | orchestrator | ++ export TEMPEST=true 2026-03-03 00:23:12.418982 | orchestrator | ++ TEMPEST=true 2026-03-03 00:23:12.418993 | orchestrator | ++ export IS_ZUUL=true 2026-03-03 00:23:12.419004 | orchestrator | ++ IS_ZUUL=true 2026-03-03 00:23:12.419019 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.90 2026-03-03 00:23:12.419031 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.90 2026-03-03 00:23:12.419042 | orchestrator | ++ export EXTERNAL_API=false 2026-03-03 00:23:12.419053 | orchestrator | ++ EXTERNAL_API=false 2026-03-03 00:23:12.419064 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-03 00:23:12.419074 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-03 00:23:12.419092 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-03 00:23:12.419103 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-03 00:23:12.419114 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-03 00:23:12.419125 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-03 00:23:12.419140 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-03 00:23:12.469853 | orchestrator | + docker version 2026-03-03 00:23:12.574796 | orchestrator | Client: Docker Engine - Community 2026-03-03 00:23:12.574903 | orchestrator | Version: 27.5.1 2026-03-03 00:23:12.574919 | orchestrator | API version: 1.47 2026-03-03 00:23:12.574934 | orchestrator | Go version: go1.22.11 2026-03-03 00:23:12.574945 | orchestrator | Git commit: 9f9e405 2026-03-03 00:23:12.574957 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-03 00:23:12.574970 | orchestrator | OS/Arch: linux/amd64 2026-03-03 00:23:12.574980 | orchestrator | Context: default 2026-03-03 00:23:12.574989 | orchestrator | 2026-03-03 00:23:12.574999 | orchestrator | Server: Docker Engine - Community 2026-03-03 00:23:12.575009 | orchestrator | Engine: 2026-03-03 00:23:12.575019 | orchestrator | Version: 27.5.1 2026-03-03 00:23:12.575030 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-03 00:23:12.575067 | orchestrator | Go version: go1.22.11 2026-03-03 00:23:12.575078 | orchestrator | Git commit: 4c9b3b0 2026-03-03 00:23:12.575087 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-03 00:23:12.575097 | orchestrator | OS/Arch: linux/amd64 2026-03-03 00:23:12.575107 | orchestrator | Experimental: false 2026-03-03 00:23:12.575116 | orchestrator | containerd: 2026-03-03 00:23:12.575126 | orchestrator | Version: v2.2.1 2026-03-03 00:23:12.575136 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-03 00:23:12.575146 | orchestrator | runc: 2026-03-03 00:23:12.575156 | orchestrator | Version: 1.3.4 2026-03-03 00:23:12.575166 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-03 00:23:12.575175 | orchestrator | docker-init: 2026-03-03 00:23:12.575185 | orchestrator | Version: 0.19.0 2026-03-03 00:23:12.575195 | orchestrator | GitCommit: de40ad0 2026-03-03 00:23:12.577697 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-03 00:23:12.587226 | orchestrator | + set -e 2026-03-03 00:23:12.587312 | orchestrator | + source /opt/manager-vars.sh 2026-03-03 00:23:12.587321 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-03 00:23:12.587330 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-03 00:23:12.587336 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-03 00:23:12.587343 | orchestrator | ++ CEPH_VERSION=reef 2026-03-03 00:23:12.587355 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-03 
00:23:12.587363 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-03 00:23:12.587370 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-03 00:23:12.587385 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-03 00:23:12.587392 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-03 00:23:12.587398 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-03 00:23:12.587405 | orchestrator | ++ export ARA=false 2026-03-03 00:23:12.587411 | orchestrator | ++ ARA=false 2026-03-03 00:23:12.587417 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-03 00:23:12.587424 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-03 00:23:12.587438 | orchestrator | ++ export TEMPEST=true 2026-03-03 00:23:12.587444 | orchestrator | ++ TEMPEST=true 2026-03-03 00:23:12.587450 | orchestrator | ++ export IS_ZUUL=true 2026-03-03 00:23:12.587456 | orchestrator | ++ IS_ZUUL=true 2026-03-03 00:23:12.587463 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.90 2026-03-03 00:23:12.587469 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.90 2026-03-03 00:23:12.587475 | orchestrator | ++ export EXTERNAL_API=false 2026-03-03 00:23:12.587482 | orchestrator | ++ EXTERNAL_API=false 2026-03-03 00:23:12.587488 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-03 00:23:12.587494 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-03 00:23:12.587500 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-03 00:23:12.587506 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-03 00:23:12.587513 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-03 00:23:12.587519 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-03 00:23:12.587525 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-03 00:23:12.587531 | orchestrator | ++ export INTERACTIVE=false 2026-03-03 00:23:12.587537 | orchestrator | ++ INTERACTIVE=false 2026-03-03 00:23:12.587543 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-03 00:23:12.587553 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-03 00:23:12.587568 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-03 00:23:12.587575 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-03 00:23:12.587581 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-03-03 00:23:12.594464 | orchestrator | + set -e 2026-03-03 00:23:12.594982 | orchestrator | + VERSION=reef 2026-03-03 00:23:12.595619 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-03 00:23:12.602382 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-03 00:23:12.602441 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-03 00:23:12.607367 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-03 00:23:12.612168 | orchestrator | + set -e 2026-03-03 00:23:12.612205 | orchestrator | + VERSION=2024.2 2026-03-03 00:23:12.612538 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-03 00:23:12.616430 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-03 00:23:12.616463 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-03 00:23:12.621936 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-03 00:23:12.621990 | orchestrator | ++ semver latest 7.0.0 2026-03-03 00:23:12.677557 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-03 00:23:12.677668 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-03 00:23:12.677683 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-03 00:23:12.677725 | orchestrator | ++ semver latest 10.0.0-0 2026-03-03 00:23:12.736490 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-03 00:23:12.736603 | orchestrator | ++ semver 2024.2 2025.1 2026-03-03 00:23:12.795808 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-03 00:23:12.795922 | orchestrator | + 
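The `set-ceph-version.sh` and `set-openstack-version.sh` steps traced above follow a simple grep-then-sed pinning pattern: check that the key exists, then rewrite its value in place. A self-contained sketch of that pattern, using a temporary file in place of the real `/opt/configuration/environments/manager/configuration.yml`:

```shell
# Sketch of the version-pinning pattern from set-ceph-version.sh;
# a temp file stands in for the real configuration.yml.
set -e
VERSION=reef
CONF=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.2\n' > "$CONF"

# Rewrite the key only if it is already present in the file,
# mirroring the [[ -n ... ]] guard in the traced script.
if grep -q '^ceph_version:' "$CONF"; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONF"
fi

grep '^ceph_version:' "$CONF"
```

The guard means a configuration file without the key is left untouched rather than silently gaining one, which keeps the scripts idempotent across configuration layouts.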
/opt/configuration/scripts/enable-resource-nodes.sh 2026-03-03 00:23:12.883388 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-03 00:23:12.884570 | orchestrator | + source /opt/venv/bin/activate 2026-03-03 00:23:12.885456 | orchestrator | ++ deactivate nondestructive 2026-03-03 00:23:12.885472 | orchestrator | ++ '[' -n '' ']' 2026-03-03 00:23:12.885486 | orchestrator | ++ '[' -n '' ']' 2026-03-03 00:23:12.885499 | orchestrator | ++ hash -r 2026-03-03 00:23:12.885516 | orchestrator | ++ '[' -n '' ']' 2026-03-03 00:23:12.886521 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-03 00:23:12.886623 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-03 00:23:12.886642 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-03 00:23:12.886654 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-03 00:23:12.886666 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-03 00:23:12.886677 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-03 00:23:12.886688 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-03 00:23:12.886699 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-03 00:23:12.886711 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-03 00:23:12.886722 | orchestrator | ++ export PATH 2026-03-03 00:23:12.886733 | orchestrator | ++ '[' -n '' ']' 2026-03-03 00:23:12.886744 | orchestrator | ++ '[' -z '' ']' 2026-03-03 00:23:12.886755 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-03 00:23:12.886765 | orchestrator | ++ PS1='(venv) ' 2026-03-03 00:23:12.886776 | orchestrator | ++ export PS1 2026-03-03 00:23:12.886787 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-03 00:23:12.886799 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-03 00:23:12.886810 | orchestrator | ++ hash -r 2026-03-03 00:23:12.886842 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-03 00:23:13.952397 | orchestrator | 2026-03-03 00:23:13.952516 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-03 00:23:13.952532 | orchestrator | 2026-03-03 00:23:13.952544 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-03 00:23:14.444830 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:14.444928 | orchestrator | 2026-03-03 00:23:14.444939 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-03 00:23:15.276767 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:15.276854 | orchestrator | 2026-03-03 00:23:15.276864 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-03 00:23:15.276872 | orchestrator | 2026-03-03 00:23:15.276880 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-03 00:23:17.290118 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:17.290230 | orchestrator | 2026-03-03 00:23:17.290248 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-03 00:23:17.342733 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:17.342843 | orchestrator | 2026-03-03 00:23:17.342863 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-03 00:23:17.746912 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:17.747028 | orchestrator | 2026-03-03 00:23:17.747043 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-03 00:23:17.789416 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:23:17.789532 | orchestrator | 2026-03-03 00:23:17.789549 | orchestrator | TASK [Install HWE 
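The playbook run above is started with `-i testbed-manager,` — the trailing comma makes Ansible treat the argument as a literal one-host inventory list instead of a path to an inventory file. A sketch of the invocation (paths copied from the trace; the command is printed rather than executed, since this sketch has no manager host to reach):

```shell
# The trailing comma after the host name turns "-i" into an inline inventory
# list, so no inventory file is needed for this single host.
inventory="testbed-manager,"
cmd=(ansible-playbook -i "$inventory"
     --vault-password-file /opt/configuration/environments/.vault_pass
     /opt/configuration/ansible/manager-part-3.yml)
printf '%s\n' "${cmd[*]}"
```

Without the comma, Ansible would try to open `testbed-manager` as an inventory file and warn that no hosts were matched.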
kernel package on Ubuntu] ************************************ 2026-03-03 00:23:18.097103 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:18.097215 | orchestrator | 2026-03-03 00:23:18.097232 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-03 00:23:18.400796 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:18.400927 | orchestrator | 2026-03-03 00:23:18.400943 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-03 00:23:18.496715 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:23:18.496807 | orchestrator | 2026-03-03 00:23:18.496822 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-03 00:23:18.496835 | orchestrator | 2026-03-03 00:23:18.496845 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-03 00:23:20.042211 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:20.043167 | orchestrator | 2026-03-03 00:23:20.043200 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-03 00:23:20.126969 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-03 00:23:20.127076 | orchestrator | 2026-03-03 00:23:20.127093 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-03 00:23:20.172182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-03 00:23:20.172333 | orchestrator | 2026-03-03 00:23:20.172351 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-03 00:23:21.173572 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-03 00:23:21.173678 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2026-03-03 00:23:21.173692 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-03 00:23:21.173704 | orchestrator | 2026-03-03 00:23:21.173717 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-03 00:23:22.945589 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-03 00:23:22.945718 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-03 00:23:22.945744 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-03 00:23:22.945764 | orchestrator | 2026-03-03 00:23:22.945787 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-03 00:23:23.558890 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-03 00:23:23.559001 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:23.559017 | orchestrator | 2026-03-03 00:23:23.559030 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-03 00:23:24.179149 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-03 00:23:24.179261 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:24.179277 | orchestrator | 2026-03-03 00:23:24.179345 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-03 00:23:24.235497 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:23:24.235579 | orchestrator | 2026-03-03 00:23:24.235589 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-03 00:23:24.583980 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:24.584088 | orchestrator | 2026-03-03 00:23:24.584105 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-03 00:23:24.655614 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-03 00:23:24.655704 | orchestrator | 2026-03-03 00:23:24.655715 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-03 00:23:25.700221 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:25.700360 | orchestrator | 2026-03-03 00:23:25.700377 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-03 00:23:26.471379 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:26.471512 | orchestrator | 2026-03-03 00:23:26.471542 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-03 00:23:40.866407 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:40.866487 | orchestrator | 2026-03-03 00:23:40.866514 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-03 00:23:40.935449 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:23:40.935557 | orchestrator | 2026-03-03 00:23:40.935574 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-03 00:23:40.935587 | orchestrator | 2026-03-03 00:23:40.935599 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-03 00:23:42.603097 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:42.603197 | orchestrator | 2026-03-03 00:23:42.603241 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-03 00:23:42.710992 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-03 00:23:42.711092 | orchestrator | 2026-03-03 00:23:42.711107 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-03 00:23:42.757521 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-03 00:23:42.757612 | orchestrator | 2026-03-03 00:23:42.757626 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-03 00:23:44.764572 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:44.764682 | orchestrator | 2026-03-03 00:23:44.764699 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-03 00:23:44.819512 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:44.819617 | orchestrator | 2026-03-03 00:23:44.819633 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-03 00:23:44.929978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-03 00:23:44.930167 | orchestrator | 2026-03-03 00:23:44.930187 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-03 00:23:47.457548 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-03 00:23:47.457659 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-03 00:23:47.457679 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-03 00:23:47.457693 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-03 00:23:47.457707 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-03 00:23:47.457721 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-03 00:23:47.457735 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-03 00:23:47.457749 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-03 00:23:47.457764 | orchestrator | 2026-03-03 00:23:47.457778 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-03-03 00:23:48.017359 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:48.017468 | orchestrator | 2026-03-03 00:23:48.017485 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-03 00:23:48.611950 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:48.612074 | orchestrator | 2026-03-03 00:23:48.612097 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-03 00:23:48.693576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-03 00:23:48.693681 | orchestrator | 2026-03-03 00:23:48.693696 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-03 00:23:49.765957 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-03 00:23:49.766113 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-03 00:23:49.766132 | orchestrator | 2026-03-03 00:23:49.766145 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-03 00:23:50.371195 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:50.371356 | orchestrator | 2026-03-03 00:23:50.371374 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-03 00:23:50.420047 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:23:50.420175 | orchestrator | 2026-03-03 00:23:50.420191 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-03 00:23:50.496861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-03 00:23:50.496963 | orchestrator | 2026-03-03 00:23:50.496979 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-03-03 00:23:51.102762 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:51.102872 | orchestrator | 2026-03-03 00:23:51.102888 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-03 00:23:51.157799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-03 00:23:51.157941 | orchestrator | 2026-03-03 00:23:51.157959 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-03 00:23:52.498847 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-03 00:23:52.498979 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-03 00:23:52.499029 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:52.499067 | orchestrator | 2026-03-03 00:23:52.499089 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-03 00:23:53.129259 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:53.129401 | orchestrator | 2026-03-03 00:23:53.129419 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-03 00:23:53.185104 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:23:53.185199 | orchestrator | 2026-03-03 00:23:53.185213 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-03 00:23:53.277621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-03 00:23:53.277722 | orchestrator | 2026-03-03 00:23:53.277737 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-03 00:23:53.782734 | orchestrator | changed: [testbed-manager] 2026-03-03 
00:23:53.782829 | orchestrator | 2026-03-03 00:23:53.782867 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-03 00:23:54.243967 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:54.244077 | orchestrator | 2026-03-03 00:23:54.244093 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-03 00:23:55.459390 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-03 00:23:55.459517 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-03 00:23:55.459531 | orchestrator | 2026-03-03 00:23:55.459544 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-03 00:23:56.072723 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:56.072864 | orchestrator | 2026-03-03 00:23:56.072882 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-03 00:23:56.430656 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:56.430791 | orchestrator | 2026-03-03 00:23:56.430820 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-03 00:23:56.787390 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:56.787501 | orchestrator | 2026-03-03 00:23:56.787519 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-03 00:23:56.840687 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:23:56.840789 | orchestrator | 2026-03-03 00:23:56.840805 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-03 00:23:56.920502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-03 00:23:56.920619 | orchestrator | 2026-03-03 00:23:56.920649 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-03-03 00:23:56.955992 | orchestrator | ok: [testbed-manager] 2026-03-03 00:23:56.956091 | orchestrator | 2026-03-03 00:23:56.956105 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-03 00:23:59.021705 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-03 00:23:59.021810 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-03 00:23:59.021825 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-03 00:23:59.021835 | orchestrator | 2026-03-03 00:23:59.021846 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-03 00:23:59.712902 | orchestrator | changed: [testbed-manager] 2026-03-03 00:23:59.713007 | orchestrator | 2026-03-03 00:23:59.713023 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-03 00:24:00.426673 | orchestrator | changed: [testbed-manager] 2026-03-03 00:24:00.426782 | orchestrator | 2026-03-03 00:24:00.426799 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-03 00:24:01.152345 | orchestrator | changed: [testbed-manager] 2026-03-03 00:24:01.152477 | orchestrator | 2026-03-03 00:24:01.152512 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-03 00:24:01.218217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-03 00:24:01.218379 | orchestrator | 2026-03-03 00:24:01.218398 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-03 00:24:01.257830 | orchestrator | ok: [testbed-manager] 2026-03-03 00:24:01.257943 | orchestrator | 2026-03-03 00:24:01.257965 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-03-03 00:24:01.932632 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-03 00:24:01.932740 | orchestrator | 2026-03-03 00:24:01.932758 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-03 00:24:02.018542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-03 00:24:02.018644 | orchestrator | 2026-03-03 00:24:02.018660 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-03 00:24:02.718506 | orchestrator | changed: [testbed-manager] 2026-03-03 00:24:02.718615 | orchestrator | 2026-03-03 00:24:02.718631 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-03 00:24:03.309219 | orchestrator | ok: [testbed-manager] 2026-03-03 00:24:03.309354 | orchestrator | 2026-03-03 00:24:03.309369 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-03 00:24:03.367735 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:24:03.367831 | orchestrator | 2026-03-03 00:24:03.367846 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-03 00:24:03.422591 | orchestrator | ok: [testbed-manager] 2026-03-03 00:24:03.422687 | orchestrator | 2026-03-03 00:24:03.422702 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-03 00:24:04.247072 | orchestrator | changed: [testbed-manager] 2026-03-03 00:24:04.247179 | orchestrator | 2026-03-03 00:24:04.247197 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-03 00:25:09.422414 | orchestrator | changed: [testbed-manager] 2026-03-03 00:25:09.422535 | orchestrator | 2026-03-03 
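The service tasks above reduce to three steps: install a systemd unit for the manager, pull the composed container images, and start the unit. A hedged shell equivalent of that sequence (the unit name `manager.service` and the compose path are assumptions based on the directory layout in the trace; the commands are collected and printed, not executed):

```shell
# Sketch of the service steps traced above (paths and unit name assumed).
compose_file=/opt/manager/docker-compose.yml
steps=(
  "docker compose -f ${compose_file} pull"
  "systemctl daemon-reload"
  "systemctl enable --now manager.service"
)
printf '%s\n' "${steps[@]}"
```

Pulling images before starting the unit is why the "Pull container images" task dominates the wall clock here (about 65 seconds) while "Manage manager service" completes in a few seconds.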
00:25:09.422552 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-03 00:25:10.385069 | orchestrator | ok: [testbed-manager] 2026-03-03 00:25:10.385159 | orchestrator | 2026-03-03 00:25:10.385172 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-03 00:25:10.444351 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:25:10.444450 | orchestrator | 2026-03-03 00:25:10.444464 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-03 00:25:12.855066 | orchestrator | changed: [testbed-manager] 2026-03-03 00:25:12.855172 | orchestrator | 2026-03-03 00:25:12.855189 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-03 00:25:12.956175 | orchestrator | ok: [testbed-manager] 2026-03-03 00:25:12.956272 | orchestrator | 2026-03-03 00:25:12.956352 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-03 00:25:12.956367 | orchestrator | 2026-03-03 00:25:12.956379 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-03 00:25:13.011902 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:25:13.011998 | orchestrator | 2026-03-03 00:25:13.012012 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-03 00:26:13.074373 | orchestrator | Pausing for 60 seconds 2026-03-03 00:26:13.074532 | orchestrator | changed: [testbed-manager] 2026-03-03 00:26:13.074563 | orchestrator | 2026-03-03 00:26:13.074584 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-03 00:26:16.257089 | orchestrator | changed: [testbed-manager] 2026-03-03 00:26:16.257215 | orchestrator | 2026-03-03 00:26:16.257231 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-03-03 00:27:18.433958 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-03 00:27:18.434150 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-03 00:27:18.434172 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-03 00:27:18.434214 | orchestrator | changed: [testbed-manager] 2026-03-03 00:27:18.434231 | orchestrator | 2026-03-03 00:27:18.434243 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-03 00:27:28.811170 | orchestrator | changed: [testbed-manager] 2026-03-03 00:27:28.811288 | orchestrator | 2026-03-03 00:27:28.811361 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-03 00:27:28.891528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-03 00:27:28.891631 | orchestrator | 2026-03-03 00:27:28.891647 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-03 00:27:28.891659 | orchestrator | 2026-03-03 00:27:28.891671 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-03 00:27:28.936079 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:27:28.936191 | orchestrator | 2026-03-03 00:27:28.936208 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-03 00:27:29.009326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-03 00:27:29.009431 | orchestrator | 2026-03-03 00:27:29.009447 | orchestrator | TASK [osism.services.manager : Deploy service 
manager version check script] **** 2026-03-03 00:27:29.796487 | orchestrator | changed: [testbed-manager] 2026-03-03 00:27:29.796600 | orchestrator | 2026-03-03 00:27:29.796617 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-03 00:27:33.081906 | orchestrator | ok: [testbed-manager] 2026-03-03 00:27:33.082101 | orchestrator | 2026-03-03 00:27:33.082123 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-03 00:27:33.156584 | orchestrator | ok: [testbed-manager] => { 2026-03-03 00:27:33.156694 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-03 00:27:33.156710 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-03 00:27:33.156725 | orchestrator | "Checking running containers against expected versions...", 2026-03-03 00:27:33.156738 | orchestrator | "", 2026-03-03 00:27:33.156749 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-03 00:27:33.156761 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-03 00:27:33.156772 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.156783 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-03 00:27:33.156794 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.156805 | orchestrator | "", 2026-03-03 00:27:33.156816 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-03 00:27:33.156828 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-03 00:27:33.156867 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.156878 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-03 00:27:33.156889 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.156900 | orchestrator | "", 2026-03-03 00:27:33.156911 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes 
Service)", 2026-03-03 00:27:33.156921 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-03 00:27:33.156932 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.156943 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-03 00:27:33.156954 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.156965 | orchestrator | "", 2026-03-03 00:27:33.156976 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-03 00:27:33.156988 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-03 00:27:33.156999 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157010 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-03 00:27:33.157021 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157032 | orchestrator | "", 2026-03-03 00:27:33.157043 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-03 00:27:33.157080 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-03 00:27:33.157091 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157103 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-03 00:27:33.157116 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157129 | orchestrator | "", 2026-03-03 00:27:33.157141 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-03 00:27:33.157154 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157167 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157179 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157192 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157204 | orchestrator | "", 2026-03-03 00:27:33.157217 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-03 00:27:33.157230 | orchestrator | " Expected: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-03-03 00:27:33.157241 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157252 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-03 00:27:33.157263 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157274 | orchestrator | "", 2026-03-03 00:27:33.157389 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-03 00:27:33.157422 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-03 00:27:33.157434 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157445 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-03 00:27:33.157467 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157478 | orchestrator | "", 2026-03-03 00:27:33.157495 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-03 00:27:33.157517 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-03 00:27:33.157529 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157541 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-03 00:27:33.157552 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157563 | orchestrator | "", 2026-03-03 00:27:33.157574 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-03 00:27:33.157585 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-03 00:27:33.157596 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157607 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-03 00:27:33.157618 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157629 | orchestrator | "", 2026-03-03 00:27:33.157640 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-03 00:27:33.157651 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157662 | orchestrator | 
" Enabled: true", 2026-03-03 00:27:33.157673 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157684 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157695 | orchestrator | "", 2026-03-03 00:27:33.157706 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-03 00:27:33.157717 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157728 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157739 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157750 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157813 | orchestrator | "", 2026-03-03 00:27:33.157825 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-03 00:27:33.157836 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157848 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157859 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157870 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157881 | orchestrator | "", 2026-03-03 00:27:33.157892 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-03 00:27:33.157903 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157914 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.157936 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.157947 | orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.157958 | orchestrator | "", 2026-03-03 00:27:33.157969 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-03 00:27:33.158002 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.158105 | orchestrator | " Enabled: true", 2026-03-03 00:27:33.158119 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-03 00:27:33.158179 
| orchestrator | " Status: ✅ MATCH", 2026-03-03 00:27:33.158213 | orchestrator | "", 2026-03-03 00:27:33.158229 | orchestrator | "=== Summary ===", 2026-03-03 00:27:33.158246 | orchestrator | "Errors (version mismatches): 0", 2026-03-03 00:27:33.158262 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-03 00:27:33.158278 | orchestrator | "", 2026-03-03 00:27:33.158294 | orchestrator | "✅ All running containers match expected versions!" 2026-03-03 00:27:33.158342 | orchestrator | ] 2026-03-03 00:27:33.158361 | orchestrator | } 2026-03-03 00:27:33.158378 | orchestrator | 2026-03-03 00:27:33.158427 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-03 00:27:33.217560 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:27:33.217655 | orchestrator | 2026-03-03 00:27:33.217666 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:27:33.217675 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-03 00:27:33.217683 | orchestrator | 2026-03-03 00:27:33.318086 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-03 00:27:33.318194 | orchestrator | + deactivate 2026-03-03 00:27:33.318217 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-03 00:27:33.318232 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-03 00:27:33.318243 | orchestrator | + export PATH 2026-03-03 00:27:33.318254 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-03 00:27:33.318265 | orchestrator | + '[' -n '' ']' 2026-03-03 00:27:33.318275 | orchestrator | + hash -r 2026-03-03 00:27:33.318286 | orchestrator | + '[' -n '' ']' 2026-03-03 00:27:33.318298 | orchestrator | + unset VIRTUAL_ENV 2026-03-03 00:27:33.318399 | orchestrator | + 
unset VIRTUAL_ENV_PROMPT 2026-03-03 00:27:33.318410 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-03-03 00:27:33.318422 | orchestrator | + unset -f deactivate 2026-03-03 00:27:33.318433 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-03 00:27:33.326264 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-03 00:27:33.326382 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-03 00:27:33.326393 | orchestrator | + local max_attempts=60 2026-03-03 00:27:33.326402 | orchestrator | + local name=ceph-ansible 2026-03-03 00:27:33.326409 | orchestrator | + local attempt_num=1 2026-03-03 00:27:33.326589 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:27:33.355759 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:27:33.355851 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-03 00:27:33.355867 | orchestrator | + local max_attempts=60 2026-03-03 00:27:33.355880 | orchestrator | + local name=kolla-ansible 2026-03-03 00:27:33.355893 | orchestrator | + local attempt_num=1 2026-03-03 00:27:33.355906 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-03 00:27:33.388187 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:27:33.388289 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-03 00:27:33.388327 | orchestrator | + local max_attempts=60 2026-03-03 00:27:33.388338 | orchestrator | + local name=osism-ansible 2026-03-03 00:27:33.388346 | orchestrator | + local attempt_num=1 2026-03-03 00:27:33.388356 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-03 00:27:33.427560 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:27:33.427657 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-03 00:27:33.427672 | orchestrator | + sh -c 
/opt/configuration/scripts/disable-ara.sh 2026-03-03 00:27:34.099701 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-03 00:27:34.273471 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-03 00:27:34.273616 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-03 00:27:34.273635 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-03 00:27:34.273649 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-03 00:27:34.273664 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-03 00:27:34.273678 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-03 00:27:34.273691 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-03 00:27:34.273704 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-03 00:27:34.273736 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-03 00:27:34.273750 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-03 00:27:34.273764 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- 
osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-03-03 00:27:34.273778 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-03 00:27:34.273792 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-03 00:27:34.273806 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-03 00:27:34.273814 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-03 00:27:34.273822 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-03 00:27:34.278703 | orchestrator | ++ semver latest 7.0.0 2026-03-03 00:27:34.322385 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-03 00:27:34.322500 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-03 00:27:34.322520 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-03 00:27:34.325228 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-03 00:27:46.427244 | orchestrator | 2026-03-03 00:27:46 | INFO  | Prepare task for execution of resolvconf. 2026-03-03 00:27:46.619059 | orchestrator | 2026-03-03 00:27:46 | INFO  | Task d6ba5121-1a01-4be1-9c1f-45adbfbf3b66 (resolvconf) was prepared for execution. 2026-03-03 00:27:46.619182 | orchestrator | 2026-03-03 00:27:46 | INFO  | It takes a moment until task d6ba5121-1a01-4be1-9c1f-45adbfbf3b66 (resolvconf) has been started and output is visible here. 
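The `wait_for_container_healthy` helper traced above (local `max_attempts`, `name`, `attempt_num`, then a `docker inspect` health probe) can be sketched as follows. This is a reconstruction from the trace, not the actual deploy script; the probe is factored into a `check_health` function so the loop can be exercised without Docker.

```shell
# Reconstruction of the wait_for_container_healthy loop seen in the trace.
# check_health wraps the probe the trace shows:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
check_health() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll until the container reports "healthy", giving up after
    # max_attempts probes.
    until [ "$(check_health "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the trace all three containers (ceph-ansible, kolla-ansible, osism-ansible) report `healthy` on the first probe, so the loop body never runs.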
2026-03-03 00:27:59.465351 | orchestrator | 2026-03-03 00:27:59.465461 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-03 00:27:59.465470 | orchestrator | 2026-03-03 00:27:59.465475 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-03 00:27:59.465481 | orchestrator | Tuesday 03 March 2026 00:27:50 +0000 (0:00:00.103) 0:00:00.103 ********* 2026-03-03 00:27:59.465486 | orchestrator | ok: [testbed-manager] 2026-03-03 00:27:59.465492 | orchestrator | 2026-03-03 00:27:59.465498 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-03 00:27:59.465503 | orchestrator | Tuesday 03 March 2026 00:27:53 +0000 (0:00:03.340) 0:00:03.444 ********* 2026-03-03 00:27:59.465508 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:27:59.465514 | orchestrator | 2026-03-03 00:27:59.465519 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-03 00:27:59.465524 | orchestrator | Tuesday 03 March 2026 00:27:53 +0000 (0:00:00.052) 0:00:03.497 ********* 2026-03-03 00:27:59.465532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-03 00:27:59.465540 | orchestrator | 2026-03-03 00:27:59.465545 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-03 00:27:59.465550 | orchestrator | Tuesday 03 March 2026 00:27:53 +0000 (0:00:00.073) 0:00:03.570 ********* 2026-03-03 00:27:59.465555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-03 00:27:59.465560 | orchestrator | 2026-03-03 00:27:59.465572 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-03 00:27:59.465577 | orchestrator | Tuesday 03 March 2026 00:27:53 +0000 (0:00:00.057) 0:00:03.627 ********* 2026-03-03 00:27:59.465582 | orchestrator | ok: [testbed-manager] 2026-03-03 00:27:59.465586 | orchestrator | 2026-03-03 00:27:59.465591 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-03 00:27:59.465596 | orchestrator | Tuesday 03 March 2026 00:27:54 +0000 (0:00:01.051) 0:00:04.679 ********* 2026-03-03 00:27:59.465601 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:27:59.465605 | orchestrator | 2026-03-03 00:27:59.465610 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-03 00:27:59.465615 | orchestrator | Tuesday 03 March 2026 00:27:54 +0000 (0:00:00.065) 0:00:04.745 ********* 2026-03-03 00:27:59.465619 | orchestrator | ok: [testbed-manager] 2026-03-03 00:27:59.465624 | orchestrator | 2026-03-03 00:27:59.465629 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-03 00:27:59.465634 | orchestrator | Tuesday 03 March 2026 00:27:55 +0000 (0:00:00.510) 0:00:05.255 ********* 2026-03-03 00:27:59.465638 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:27:59.465643 | orchestrator | 2026-03-03 00:27:59.465648 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-03 00:27:59.465654 | orchestrator | Tuesday 03 March 2026 00:27:55 +0000 (0:00:00.086) 0:00:05.342 ********* 2026-03-03 00:27:59.465659 | orchestrator | changed: [testbed-manager] 2026-03-03 00:27:59.465664 | orchestrator | 2026-03-03 00:27:59.465668 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-03 00:27:59.465673 | orchestrator | Tuesday 03 March 2026 00:27:56 +0000 (0:00:00.540) 0:00:05.882 ********* 2026-03-03 00:27:59.465678 | orchestrator | changed: 
[testbed-manager] 2026-03-03 00:27:59.465682 | orchestrator | 2026-03-03 00:27:59.465687 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-03 00:27:59.465691 | orchestrator | Tuesday 03 March 2026 00:27:57 +0000 (0:00:01.041) 0:00:06.923 ********* 2026-03-03 00:27:59.465696 | orchestrator | ok: [testbed-manager] 2026-03-03 00:27:59.465716 | orchestrator | 2026-03-03 00:27:59.465721 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-03 00:27:59.465726 | orchestrator | Tuesday 03 March 2026 00:27:58 +0000 (0:00:00.966) 0:00:07.890 ********* 2026-03-03 00:27:59.465730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-03 00:27:59.465735 | orchestrator | 2026-03-03 00:27:59.465739 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-03 00:27:59.465744 | orchestrator | Tuesday 03 March 2026 00:27:58 +0000 (0:00:00.095) 0:00:07.985 ********* 2026-03-03 00:27:59.465748 | orchestrator | changed: [testbed-manager] 2026-03-03 00:27:59.465753 | orchestrator | 2026-03-03 00:27:59.465758 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:27:59.465763 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-03 00:27:59.465768 | orchestrator | 2026-03-03 00:27:59.465773 | orchestrator | 2026-03-03 00:27:59.465777 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:27:59.465782 | orchestrator | Tuesday 03 March 2026 00:27:59 +0000 (0:00:01.099) 0:00:09.085 ********* 2026-03-03 00:27:59.465786 | orchestrator | =============================================================================== 2026-03-03 00:27:59.465791 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.34s 2026-03-03 00:27:59.465795 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.10s 2026-03-03 00:27:59.465800 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s 2026-03-03 00:27:59.465804 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2026-03-03 00:27:59.465809 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s 2026-03-03 00:27:59.465813 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2026-03-03 00:27:59.465830 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2026-03-03 00:27:59.465835 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2026-03-03 00:27:59.465840 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-03 00:27:59.465844 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-03-03 00:27:59.465849 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-03-03 00:27:59.465855 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-03-03 00:27:59.465860 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2026-03-03 00:27:59.739629 | orchestrator | + osism apply sshconfig 2026-03-03 00:28:11.890906 | orchestrator | 2026-03-03 00:28:11 | INFO  | Prepare task for execution of sshconfig. 2026-03-03 00:28:11.960747 | orchestrator | 2026-03-03 00:28:11 | INFO  | Task 4bf3b033-3044-4c24-9a2f-128fd570d923 (sshconfig) was prepared for execution. 
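The resolvconf steps just traced (archive a regular `/etc/resolv.conf`, then link the systemd-resolved stub resolver into place) can be sketched as a minimal shell equivalent. This is an illustrative approximation of what the `osism.commons.resolvconf` role does, not the role's actual implementation; paths are parameterized here so the logic can be tested outside a real system.

```shell
# Minimal sketch of the traced resolvconf steps, assuming systemd-resolved
# is already installed. Normally resolv=/etc/resolv.conf and
# stub=/run/systemd/resolve/stub-resolv.conf.
link_stub_resolv() {
    local resolv="$1"
    local stub="$2"
    # Archive an existing regular file (the role's "Archive existing file"
    # task); a pre-existing symlink is simply replaced.
    if [ -f "$resolv" ] && [ ! -L "$resolv" ]; then
        mv "$resolv" "${resolv}.bak"
    fi
    # Link the systemd-resolved stub resolver to the resolv.conf path.
    ln -sf "$stub" "$resolv"
}
```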
2026-03-03 00:28:11.960874 | orchestrator | 2026-03-03 00:28:11 | INFO  | It takes a moment until task 4bf3b033-3044-4c24-9a2f-128fd570d923 (sshconfig) has been started and output is visible here. 2026-03-03 00:28:22.715427 | orchestrator | 2026-03-03 00:28:22.715541 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-03 00:28:22.715559 | orchestrator | 2026-03-03 00:28:22.715571 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-03 00:28:22.715583 | orchestrator | Tuesday 03 March 2026 00:28:15 +0000 (0:00:00.143) 0:00:00.143 ********* 2026-03-03 00:28:22.715594 | orchestrator | ok: [testbed-manager] 2026-03-03 00:28:22.715607 | orchestrator | 2026-03-03 00:28:22.715618 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-03 00:28:22.715658 | orchestrator | Tuesday 03 March 2026 00:28:16 +0000 (0:00:00.476) 0:00:00.619 ********* 2026-03-03 00:28:22.715670 | orchestrator | changed: [testbed-manager] 2026-03-03 00:28:22.715681 | orchestrator | 2026-03-03 00:28:22.715697 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-03 00:28:22.715716 | orchestrator | Tuesday 03 March 2026 00:28:16 +0000 (0:00:00.494) 0:00:01.114 ********* 2026-03-03 00:28:22.715734 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-03 00:28:22.715753 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-03 00:28:22.715771 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-03 00:28:22.715789 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-03 00:28:22.715808 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-03 00:28:22.715829 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-03-03 00:28:22.715848 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-03-03 00:28:22.715868 | orchestrator | 2026-03-03 00:28:22.715887 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-03 00:28:22.715907 | orchestrator | Tuesday 03 March 2026 00:28:21 +0000 (0:00:05.156) 0:00:06.270 ********* 2026-03-03 00:28:22.715920 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:28:22.715932 | orchestrator | 2026-03-03 00:28:22.715945 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-03 00:28:22.715959 | orchestrator | Tuesday 03 March 2026 00:28:22 +0000 (0:00:00.086) 0:00:06.357 ********* 2026-03-03 00:28:22.715972 | orchestrator | changed: [testbed-manager] 2026-03-03 00:28:22.715984 | orchestrator | 2026-03-03 00:28:22.715998 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:28:22.716012 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:28:22.716025 | orchestrator | 2026-03-03 00:28:22.716036 | orchestrator | 2026-03-03 00:28:22.716047 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:28:22.716058 | orchestrator | Tuesday 03 March 2026 00:28:22 +0000 (0:00:00.491) 0:00:06.849 ********* 2026-03-03 00:28:22.716069 | orchestrator | =============================================================================== 2026-03-03 00:28:22.716080 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.16s 2026-03-03 00:28:22.716091 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2026-03-03 00:28:22.716102 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.49s 2026-03-03 00:28:22.716113 | orchestrator | osism.commons.sshconfig : Get home directory of operator user 
----------- 0.48s 2026-03-03 00:28:22.716124 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-03-03 00:28:22.903643 | orchestrator | + osism apply known-hosts 2026-03-03 00:28:34.752440 | orchestrator | 2026-03-03 00:28:34 | INFO  | Prepare task for execution of known-hosts. 2026-03-03 00:28:34.823669 | orchestrator | 2026-03-03 00:28:34 | INFO  | Task 47f2b69f-c08f-44ea-ab50-105f191b1ff2 (known-hosts) was prepared for execution. 2026-03-03 00:28:34.823778 | orchestrator | 2026-03-03 00:28:34 | INFO  | It takes a moment until task 47f2b69f-c08f-44ea-ab50-105f191b1ff2 (known-hosts) has been started and output is visible here. 2026-03-03 00:28:50.720073 | orchestrator | 2026-03-03 00:28:50.720173 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-03 00:28:50.720189 | orchestrator | 2026-03-03 00:28:50.720201 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-03 00:28:50.720213 | orchestrator | Tuesday 03 March 2026 00:28:38 +0000 (0:00:00.166) 0:00:00.166 ********* 2026-03-03 00:28:50.720225 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-03 00:28:50.720237 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-03 00:28:50.720270 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-03 00:28:50.720282 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-03 00:28:50.720293 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-03 00:28:50.720335 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-03 00:28:50.720356 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-03 00:28:50.720368 | orchestrator | 2026-03-03 00:28:50.720380 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-03 00:28:50.720392 | 
orchestrator | Tuesday 03 March 2026 00:28:44 +0000 (0:00:05.805) 0:00:05.971 ********* 2026-03-03 00:28:50.720415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-03 00:28:50.720429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-03 00:28:50.720440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-03 00:28:50.720451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-03 00:28:50.720462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-03 00:28:50.720473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-03 00:28:50.720483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-03 00:28:50.720494 | orchestrator | 2026-03-03 00:28:50.720505 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:28:50.720516 | orchestrator | Tuesday 03 March 2026 00:28:44 +0000 (0:00:00.156) 0:00:06.127 ********* 2026-03-03 00:28:50.720528 | orchestrator | 
changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFo6M8s5xmsWoBtPK/57vkKG2Ri6X/Vg8Bu5NEyG2/MLdR+OJelqKTgFl3u/mQWYxZIwKumc/2b2657Tk9oRaAM=) 2026-03-03 00:28:50.720544 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI5TQ6YB+Y0FBz7dJ3svT0Wcj+u2qhjm6pvTg66sVfeY5xAb2gnJkb5RYg7i7AMYye4419so9/pgSUG3TuePgUcM698gOteiPPZ4g93glPjX70naU3fMsPVYYJxgWfPxy4Fe5LbEx4qIUdFNIH4jal8WcV70XDCOIuMr6C0R1erZ+/4t3M5VRjH902B2icUKVDIgVYBMsgNHYhBo62no3yUGxBMpDDqVjcunf4JUKzNZ1/5nFdvTnVeIMV1fIcEIMFxDQ74FQeGsNbymvTeI6cb7OtiW6+shGNbVEAse+fClaMuB2RgZrjr/ULBLJY6htKMRaXWUdIyUkFef8aQCRMPiZ9716A3ZMceGHiGITwRe/x3ijVJnXIQYrfBMXAfFJNLRXXETnJO2z7Zntg1B8iJo4yIAZW4NxcE4urtjDbDWoIeBhbiT/UeERk3gRCfQ3ddJYrZgOyMXL1XZ8wkHfukjwU10scCdCWm3dQFlM9GaYYISV1qtfBMz1tMQ6Z23U=) 2026-03-03 00:28:50.720562 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8WxaaD0C3YnupQrJNyE1RJqP9BPqAK5mDC5QTeYzzm) 2026-03-03 00:28:50.720576 | orchestrator | 2026-03-03 00:28:50.720589 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:28:50.720602 | orchestrator | Tuesday 03 March 2026 00:28:46 +0000 (0:00:01.197) 0:00:07.325 ********* 2026-03-03 00:28:50.720615 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGVX0HZPh7LoWqdx2IFI8IggdnjTo9WntFHPBYJYibJvDOfRxcXfXCM2qfpLhp4HDLM+M642n49cUFbbV8Sijzc=) 2026-03-03 00:28:50.720677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDkUw7CsTU+IAJBlVdhyBsu+Ole9+GTX67xc3SaZ9mqkW9vRNVXo1oagumbbC9VmZcIUCv2HX9ymAKgy4vuC/HWEgy2HNy6breEdg0GcJ1TwuV49ICLNok202lpdH/AVFkNuaptlS1Obqa+/WiygQbFB2OQmBH4nOgYgrMXydfk3DpN4qopizVVPxVf84VZZeyw2HSpVlBonxo04Kyz9Ar5iLq/z36feYen1H1i/80cO8dLM7wl8y6+cZtM0X6lgPqnhxnlQgOtBrE9mLlj+htJzgUJK21cnuVSZcJp+imgN+uep9nfPuyWCTPxdwqbmX+TcT+jiP6wg75/dYFGgGgqOid1R1/VSPEFs9DZ4zsckvCx0czgj8wSu6xRf6A9QRGEazjZ2vGwuGBBb51996P82zorSCglLyKOmhgO5Y1V3AGD+KfKOAp2ra342WWXtqFGxiHTQGF+A+2SugtcWAHQYe1U4QdPmSn1a2TPcesODv6iei4Qt2uxdVHn5CluuO8=) 2026-03-03 00:28:50.720700 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGzKyth0mjstfxfzbSlHSRPH+XBvxSFjk1DIbxnjxPRN) 2026-03-03 00:28:50.720718 | orchestrator | 2026-03-03 00:28:50.720737 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:28:50.720756 | orchestrator | Tuesday 03 March 2026 00:28:47 +0000 (0:00:01.048) 0:00:08.374 ********* 2026-03-03 00:28:50.720776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCkf3EOHEP2RFxKO4UegGc4B5N86F9eJPxbbRxdm8OILTijoGcjPA1dH3XNlFCoTn7FYGEPNVvYw6oJ0XIH5BK8=) 2026-03-03 00:28:50.720797 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD3KsaDJJXHPR9VZQLckjijwAxHXhFAZCLCJNoTlJ3LZaZcoVr8UkQIZd8tTsp092mHgt0xy5k2Wq8q3z5h2ioSKJujoyWI03bFSxvuLfGrqEDLM+DlBinjude1iVT3IoNIe9rewL/B2ieMUJENjSsNrd1n29RSgx199WSx0ISfMpRSHgVb6SlRL7mRIZpewzh/ckY1tg+XXWxN0826TDHZ6eXnufnuZ65jZ4xTnZj1vYO9ZK3TwZPpK5SHwYDwgXSNrc4jEOBJ5kOqvZUDczkkJ+XyxBAmH/I7D/b3BsDVOoU3yz+aEOwWxCtqthB8M4b5lVq7w9RNI/z3X0Ho0Shc+1vtk7SWDu7PTmlxo2Bs/v8VURbrwS2izJS1frv9APDqDBaXxyF0jSRerQZv+BizqYabk0p1GXTkw0EqfWdQovPW7m7B9okz4v5ZbCJWOrco68um2E/k0L+YWgAbvncj0oUNaNCjbs0NLw8iyBhS1MJ1JtcyMP2tM6nL7WaMgWc=) 2026-03-03 00:28:50.720906 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIkG8CF1PyUYe3FIG/N3nFZLjfH/PV374RFnVAfAmpyv) 2026-03-03 00:28:50.720929 | orchestrator | 2026-03-03 00:28:50.720944 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:28:50.720962 | orchestrator | Tuesday 03 March 2026 00:28:48 +0000 (0:00:01.077) 0:00:09.452 ********* 2026-03-03 00:28:50.720987 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPKBC2Ax655TlsEnMeO4cMvQfk2dzpNBP8mfpPHvqfYKeVikwmrTfwYuwKxq27wJ8OEKL9W4agOk7siwZS7TJMk=) 2026-03-03 00:28:50.721006 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDoZVjHoCPwjVhOQLZs4RUkYPvrHKwH87csmXHbpIbBsJnM0RBC3uvX3mimugRGfqdtntPs2lZx7RpOIbZvjYj5ebYzDFRtZdItrOz00sHfy74sA0DwRFfMCa7o/yjyXpxFm82+luxY7zCVNsuO6B/oyrRmdZzW8FpiL5HJJDuiGeDkhdYYqtEfLPVuqETBBAEsUwxihoj30e93tnRFnFTMg4Mpw3okOT8LJJrbuycN4MjONKY+ezDibASlXtxUtMASlrSY6F6/P41dyBJNgUD0YAisdFscx481Toa1xSujrpMhfI17Xo8DkJOFgbBapl5bFVCJvf1lTIcv1oXrOIm1JXZ1QDnV564iIEh7br0vwoDjHIrmQ3dK13urOMF/KfuN63NNVfvL3941MbG05A2baXYaFO5ecgQsX8tUHMREXkYqXNQDq07sU9+fehhdZPRm6k4+gfCkc1wRSUPJNsKi7TtV2LCtb6nIJKxXbslJWk6+hFMVhE6SXr7i9PBVI2M=) 2026-03-03 00:28:50.721026 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPqLatucOWljuFP26yIFZmyH8dbhzr5c5MSpPyLNBnrH) 2026-03-03 00:28:50.721045 | orchestrator | 2026-03-03 00:28:50.721065 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:28:50.721083 | orchestrator | Tuesday 03 March 2026 00:28:49 +0000 (0:00:01.028) 0:00:10.480 ********* 2026-03-03 00:28:50.721100 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6QKYJ94I0icNYODjYKqZRGPHg4gzMtnBpPh1b5Clpy/ZzYDwjm9RZEXk0ZyfFlbe9ZrDtQyYEJtkRse0yR8WcQkFa9UaaPp7ji/0vglfb7JI4p14o+d5ykWvuuhlISQe0BmLxo48GZ9fE9hh1ro7dewxnyeTN1wPhB/O8UJ0CZPTVIX8rgUdHXsWWQaG8ptWNt92v6E1bvpNdMPNjixNgsKXRFpWvaoZxKEq9uw1nVqNm3O78P+ZRHGq4dHfeDP0B9QVK6u4t3He1UDrScGb3ECwwe6bb6+gTxGm4ULhnaL2yzvUYdzcQP4LLKj5Z4uzjDYnr08tYaVPJoAE28V6JWsRFhlun9bnaIx3ZoohKxctCWVfUo6+oDvUea9+BB6SwMsiiQW1886BSP4V7WrU5YHlLZSRBcSqP8drT9GiW+usTvonngTqcAAgdi9EbWBJWnj8CFsiWYsVO0eswg+aSkWLvBiFrRPcSVpyKP31l6XznGGazJAsBMXc92BWiLDc=) 2026-03-03 00:28:50.721136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2uFE0y4ctF6w8Ft75pSrj0uWSFcuj4VXMZ6PyihFLIqCijAaFIHbyB/1aKj83wicynd7DPVWgc9iEnZgiSYQY=) 2026-03-03 00:28:50.721154 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBsOcaaAHrQbm2uUaooI4+jISDzNcCh8JGliAAHOYq/G) 2026-03-03 00:28:50.721169 | orchestrator | 2026-03-03 00:28:50.721184 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:28:50.721202 | orchestrator | Tuesday 03 March 2026 00:28:50 +0000 (0:00:01.049) 0:00:11.530 ********* 2026-03-03 00:28:50.721235 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDJirJtz+jNFSNjTlwWbm7f1ScsLR1/muQKxLBztyWlYzYXdmDACKF67wuUH/5ExrnJlBhqLX87jGfybscEYlQk=) 2026-03-03 00:29:01.684420 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINsPNxCtzICUyC83gFk+BAlMCVYbVfp+pI1yD+PIn/Fv) 2026-03-03 00:29:01.684553 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCS6qzqjPDbPY8bE7FQ5rN//sHuN6lO04j3xIhFFx4N+LF5MgL/r8xmMXCzg8rM9zM4quNEXLPDm6RtSKH1hlLcYhTGV/gJLIidDILqZimxmjJXtWrtiup3Q02OWa5K1t7HRask9+1xgS8bcQQTgCpkMMHvpC3CM5rqbaSvfUvUhW7LV6jZ/m+/Ng+LZQdd9HCyqnGsiSv4jEHs0yVRAYeTD5WqP2UiGFV1DEZp158mp4zRhJoHJuK7CkCgEddAYyadYyymSNPled937UI5rSx+88sEd/5PYd7oluyuf3GS/jOhVvsGXXktcOuN9TuiQinVNHhciWJzUFwf4Jrby+5v16NtxhX8dH6/NG95usGLJTHk/ZzQlEZ6wFb+LltWr3PTN/vIlMsEDZRF8uvdw4YqGUp0tvC0dvboZcz5FySd7ThkQWBcWvHCpQ6heT5sYJU1xMaOqVX25uhRNMChHKhVC9DeBXrHFNw4rcdk2tKqEpc2K9h1w3CfF+Gpy91qYx0=) 2026-03-03 00:29:01.684574 | orchestrator | 2026-03-03 00:29:01.684588 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:29:01.684601 | orchestrator | Tuesday 03 March 2026 00:28:51 +0000 (0:00:01.058) 0:00:12.589 ********* 2026-03-03 00:29:01.684612 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJa+Ah8/kJJKYNESev9rWVeRC9g0bqhzC6vT1M7KDOHk51cIiZWi4qpMBO8zzxGKKcbAeL0QJwf+L7rAMvzmaio=) 2026-03-03 00:29:01.684626 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqkRJ0wrLHTz/3N/6F6My4a7RDQmP21FAdWQQLu6HxVxOve0rQNKpkJ2cvkTKWpxK0w8vb9ptBsYRSDfCbCooXdcx5nzxMgUPGTd8hkiJh9dmFpn8/9oMW7xYL0YcdRvh8GldSmJnn8geBcCTapl5mGF4F0JZmc6E12CFqbTtprHzvfoAWHsDeYVPXiLOb0uDqz3r90Bo6LyL6W4t83yGYffJ1a8ebeXn869i7Dy0aS1GbYI2uf7qQkQB4A9IvFXuWaUflC7fPueLgpdn7jlTgmXNfz/A4+CokFf7Iujnbj7Gdp9ESCSDHme5PStilvhvrCpnlw5+wzz4TGEpHqat6fQrdtnamIw+Kxy9Ux7w32+MJ7BvMYmzIDLJRVGqiH6aZH2JBXmKhAYM/2z41IjrgDglzaHgfKp/YLiFwj5y/wS2ACjBarvdMCkuitUmqqfy87WHaeK5pv+es8PZyeHseTBz08UAImb0z4Had8zzEv0n3r6DlFcEQpBnzBqsYqe8=) 2026-03-03 00:29:01.684638 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBPo5jdqfr6KXIhJ+WS2q02kFcdq57iGhobnq5+L4pbr) 2026-03-03 00:29:01.684649 | orchestrator | 2026-03-03 00:29:01.684661 | 
orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-03 00:29:01.684673 | orchestrator | Tuesday 03 March 2026 00:28:52 +0000 (0:00:01.066) 0:00:13.655 ********* 2026-03-03 00:29:01.684684 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-03 00:29:01.684696 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-03 00:29:01.684707 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-03 00:29:01.684718 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-03 00:29:01.684729 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-03 00:29:01.684760 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-03 00:29:01.684796 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-03 00:29:01.684808 | orchestrator | 2026-03-03 00:29:01.684819 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-03 00:29:01.684832 | orchestrator | Tuesday 03 March 2026 00:28:57 +0000 (0:00:05.211) 0:00:18.866 ********* 2026-03-03 00:29:01.684843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-03 00:29:01.684856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-03 00:29:01.684867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-03 00:29:01.684878 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-03 00:29:01.684890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-03 00:29:01.684900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-03 00:29:01.684911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-03 00:29:01.684922 | orchestrator | 2026-03-03 00:29:01.684950 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:29:01.684962 | orchestrator | Tuesday 03 March 2026 00:28:57 +0000 (0:00:00.174) 0:00:19.041 ********* 2026-03-03 00:29:01.684976 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI5TQ6YB+Y0FBz7dJ3svT0Wcj+u2qhjm6pvTg66sVfeY5xAb2gnJkb5RYg7i7AMYye4419so9/pgSUG3TuePgUcM698gOteiPPZ4g93glPjX70naU3fMsPVYYJxgWfPxy4Fe5LbEx4qIUdFNIH4jal8WcV70XDCOIuMr6C0R1erZ+/4t3M5VRjH902B2icUKVDIgVYBMsgNHYhBo62no3yUGxBMpDDqVjcunf4JUKzNZ1/5nFdvTnVeIMV1fIcEIMFxDQ74FQeGsNbymvTeI6cb7OtiW6+shGNbVEAse+fClaMuB2RgZrjr/ULBLJY6htKMRaXWUdIyUkFef8aQCRMPiZ9716A3ZMceGHiGITwRe/x3ijVJnXIQYrfBMXAfFJNLRXXETnJO2z7Zntg1B8iJo4yIAZW4NxcE4urtjDbDWoIeBhbiT/UeERk3gRCfQ3ddJYrZgOyMXL1XZ8wkHfukjwU10scCdCWm3dQFlM9GaYYISV1qtfBMz1tMQ6Z23U=) 2026-03-03 00:29:01.684988 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFo6M8s5xmsWoBtPK/57vkKG2Ri6X/Vg8Bu5NEyG2/MLdR+OJelqKTgFl3u/mQWYxZIwKumc/2b2657Tk9oRaAM=) 2026-03-03 00:29:01.685000 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8WxaaD0C3YnupQrJNyE1RJqP9BPqAK5mDC5QTeYzzm) 2026-03-03 00:29:01.685011 | orchestrator | 2026-03-03 00:29:01.685022 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:29:01.685033 | orchestrator | Tuesday 03 March 2026 00:28:58 +0000 (0:00:01.063) 0:00:20.105 ********* 2026-03-03 00:29:01.685044 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkUw7CsTU+IAJBlVdhyBsu+Ole9+GTX67xc3SaZ9mqkW9vRNVXo1oagumbbC9VmZcIUCv2HX9ymAKgy4vuC/HWEgy2HNy6breEdg0GcJ1TwuV49ICLNok202lpdH/AVFkNuaptlS1Obqa+/WiygQbFB2OQmBH4nOgYgrMXydfk3DpN4qopizVVPxVf84VZZeyw2HSpVlBonxo04Kyz9Ar5iLq/z36feYen1H1i/80cO8dLM7wl8y6+cZtM0X6lgPqnhxnlQgOtBrE9mLlj+htJzgUJK21cnuVSZcJp+imgN+uep9nfPuyWCTPxdwqbmX+TcT+jiP6wg75/dYFGgGgqOid1R1/VSPEFs9DZ4zsckvCx0czgj8wSu6xRf6A9QRGEazjZ2vGwuGBBb51996P82zorSCglLyKOmhgO5Y1V3AGD+KfKOAp2ra342WWXtqFGxiHTQGF+A+2SugtcWAHQYe1U4QdPmSn1a2TPcesODv6iei4Qt2uxdVHn5CluuO8=) 2026-03-03 00:29:01.685065 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGVX0HZPh7LoWqdx2IFI8IggdnjTo9WntFHPBYJYibJvDOfRxcXfXCM2qfpLhp4HDLM+M642n49cUFbbV8Sijzc=) 2026-03-03 00:29:01.685077 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGzKyth0mjstfxfzbSlHSRPH+XBvxSFjk1DIbxnjxPRN) 2026-03-03 00:29:01.685087 | orchestrator | 2026-03-03 00:29:01.685098 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:29:01.685109 | orchestrator | Tuesday 03 March 2026 00:28:59 +0000 (0:00:01.001) 0:00:21.107 ********* 2026-03-03 00:29:01.685121 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIkG8CF1PyUYe3FIG/N3nFZLjfH/PV374RFnVAfAmpyv) 2026-03-03 00:29:01.685132 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD3KsaDJJXHPR9VZQLckjijwAxHXhFAZCLCJNoTlJ3LZaZcoVr8UkQIZd8tTsp092mHgt0xy5k2Wq8q3z5h2ioSKJujoyWI03bFSxvuLfGrqEDLM+DlBinjude1iVT3IoNIe9rewL/B2ieMUJENjSsNrd1n29RSgx199WSx0ISfMpRSHgVb6SlRL7mRIZpewzh/ckY1tg+XXWxN0826TDHZ6eXnufnuZ65jZ4xTnZj1vYO9ZK3TwZPpK5SHwYDwgXSNrc4jEOBJ5kOqvZUDczkkJ+XyxBAmH/I7D/b3BsDVOoU3yz+aEOwWxCtqthB8M4b5lVq7w9RNI/z3X0Ho0Shc+1vtk7SWDu7PTmlxo2Bs/v8VURbrwS2izJS1frv9APDqDBaXxyF0jSRerQZv+BizqYabk0p1GXTkw0EqfWdQovPW7m7B9okz4v5ZbCJWOrco68um2E/k0L+YWgAbvncj0oUNaNCjbs0NLw8iyBhS1MJ1JtcyMP2tM6nL7WaMgWc=) 2026-03-03 00:29:01.685143 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCkf3EOHEP2RFxKO4UegGc4B5N86F9eJPxbbRxdm8OILTijoGcjPA1dH3XNlFCoTn7FYGEPNVvYw6oJ0XIH5BK8=) 2026-03-03 00:29:01.685155 | orchestrator | 2026-03-03 00:29:01.685166 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:29:01.685176 | orchestrator | Tuesday 03 March 2026 00:29:00 +0000 (0:00:01.055) 0:00:22.163 ********* 2026-03-03 00:29:01.685187 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPqLatucOWljuFP26yIFZmyH8dbhzr5c5MSpPyLNBnrH) 2026-03-03 00:29:01.685218 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDoZVjHoCPwjVhOQLZs4RUkYPvrHKwH87csmXHbpIbBsJnM0RBC3uvX3mimugRGfqdtntPs2lZx7RpOIbZvjYj5ebYzDFRtZdItrOz00sHfy74sA0DwRFfMCa7o/yjyXpxFm82+luxY7zCVNsuO6B/oyrRmdZzW8FpiL5HJJDuiGeDkhdYYqtEfLPVuqETBBAEsUwxihoj30e93tnRFnFTMg4Mpw3okOT8LJJrbuycN4MjONKY+ezDibASlXtxUtMASlrSY6F6/P41dyBJNgUD0YAisdFscx481Toa1xSujrpMhfI17Xo8DkJOFgbBapl5bFVCJvf1lTIcv1oXrOIm1JXZ1QDnV564iIEh7br0vwoDjHIrmQ3dK13urOMF/KfuN63NNVfvL3941MbG05A2baXYaFO5ecgQsX8tUHMREXkYqXNQDq07sU9+fehhdZPRm6k4+gfCkc1wRSUPJNsKi7TtV2LCtb6nIJKxXbslJWk6+hFMVhE6SXr7i9PBVI2M=) 2026-03-03 00:29:06.645997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPKBC2Ax655TlsEnMeO4cMvQfk2dzpNBP8mfpPHvqfYKeVikwmrTfwYuwKxq27wJ8OEKL9W4agOk7siwZS7TJMk=) 2026-03-03 00:29:06.646181 | orchestrator | 2026-03-03 00:29:06.646199 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:29:06.646212 | orchestrator | Tuesday 03 March 2026 00:29:02 +0000 (0:00:01.072) 0:00:23.235 ********* 2026-03-03 00:29:06.646224 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2uFE0y4ctF6w8Ft75pSrj0uWSFcuj4VXMZ6PyihFLIqCijAaFIHbyB/1aKj83wicynd7DPVWgc9iEnZgiSYQY=) 2026-03-03 00:29:06.646238 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6QKYJ94I0icNYODjYKqZRGPHg4gzMtnBpPh1b5Clpy/ZzYDwjm9RZEXk0ZyfFlbe9ZrDtQyYEJtkRse0yR8WcQkFa9UaaPp7ji/0vglfb7JI4p14o+d5ykWvuuhlISQe0BmLxo48GZ9fE9hh1ro7dewxnyeTN1wPhB/O8UJ0CZPTVIX8rgUdHXsWWQaG8ptWNt92v6E1bvpNdMPNjixNgsKXRFpWvaoZxKEq9uw1nVqNm3O78P+ZRHGq4dHfeDP0B9QVK6u4t3He1UDrScGb3ECwwe6bb6+gTxGm4ULhnaL2yzvUYdzcQP4LLKj5Z4uzjDYnr08tYaVPJoAE28V6JWsRFhlun9bnaIx3ZoohKxctCWVfUo6+oDvUea9+BB6SwMsiiQW1886BSP4V7WrU5YHlLZSRBcSqP8drT9GiW+usTvonngTqcAAgdi9EbWBJWnj8CFsiWYsVO0eswg+aSkWLvBiFrRPcSVpyKP31l6XznGGazJAsBMXc92BWiLDc=) 
2026-03-03 00:29:06.646274 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBsOcaaAHrQbm2uUaooI4+jISDzNcCh8JGliAAHOYq/G) 2026-03-03 00:29:06.646287 | orchestrator | 2026-03-03 00:29:06.646387 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:29:06.646403 | orchestrator | Tuesday 03 March 2026 00:29:03 +0000 (0:00:01.074) 0:00:24.309 ********* 2026-03-03 00:29:06.646414 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDJirJtz+jNFSNjTlwWbm7f1ScsLR1/muQKxLBztyWlYzYXdmDACKF67wuUH/5ExrnJlBhqLX87jGfybscEYlQk=) 2026-03-03 00:29:06.646426 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCS6qzqjPDbPY8bE7FQ5rN//sHuN6lO04j3xIhFFx4N+LF5MgL/r8xmMXCzg8rM9zM4quNEXLPDm6RtSKH1hlLcYhTGV/gJLIidDILqZimxmjJXtWrtiup3Q02OWa5K1t7HRask9+1xgS8bcQQTgCpkMMHvpC3CM5rqbaSvfUvUhW7LV6jZ/m+/Ng+LZQdd9HCyqnGsiSv4jEHs0yVRAYeTD5WqP2UiGFV1DEZp158mp4zRhJoHJuK7CkCgEddAYyadYyymSNPled937UI5rSx+88sEd/5PYd7oluyuf3GS/jOhVvsGXXktcOuN9TuiQinVNHhciWJzUFwf4Jrby+5v16NtxhX8dH6/NG95usGLJTHk/ZzQlEZ6wFb+LltWr3PTN/vIlMsEDZRF8uvdw4YqGUp0tvC0dvboZcz5FySd7ThkQWBcWvHCpQ6heT5sYJU1xMaOqVX25uhRNMChHKhVC9DeBXrHFNw4rcdk2tKqEpc2K9h1w3CfF+Gpy91qYx0=) 2026-03-03 00:29:06.646438 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINsPNxCtzICUyC83gFk+BAlMCVYbVfp+pI1yD+PIn/Fv) 2026-03-03 00:29:06.646449 | orchestrator | 2026-03-03 00:29:06.646466 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-03 00:29:06.646485 | orchestrator | Tuesday 03 March 2026 00:29:04 +0000 (0:00:01.060) 0:00:25.370 ********* 2026-03-03 00:29:06.646503 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCqkRJ0wrLHTz/3N/6F6My4a7RDQmP21FAdWQQLu6HxVxOve0rQNKpkJ2cvkTKWpxK0w8vb9ptBsYRSDfCbCooXdcx5nzxMgUPGTd8hkiJh9dmFpn8/9oMW7xYL0YcdRvh8GldSmJnn8geBcCTapl5mGF4F0JZmc6E12CFqbTtprHzvfoAWHsDeYVPXiLOb0uDqz3r90Bo6LyL6W4t83yGYffJ1a8ebeXn869i7Dy0aS1GbYI2uf7qQkQB4A9IvFXuWaUflC7fPueLgpdn7jlTgmXNfz/A4+CokFf7Iujnbj7Gdp9ESCSDHme5PStilvhvrCpnlw5+wzz4TGEpHqat6fQrdtnamIw+Kxy9Ux7w32+MJ7BvMYmzIDLJRVGqiH6aZH2JBXmKhAYM/2z41IjrgDglzaHgfKp/YLiFwj5y/wS2ACjBarvdMCkuitUmqqfy87WHaeK5pv+es8PZyeHseTBz08UAImb0z4Had8zzEv0n3r6DlFcEQpBnzBqsYqe8=) 2026-03-03 00:29:06.646522 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJa+Ah8/kJJKYNESev9rWVeRC9g0bqhzC6vT1M7KDOHk51cIiZWi4qpMBO8zzxGKKcbAeL0QJwf+L7rAMvzmaio=) 2026-03-03 00:29:06.646541 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBPo5jdqfr6KXIhJ+WS2q02kFcdq57iGhobnq5+L4pbr) 2026-03-03 00:29:06.646557 | orchestrator | 2026-03-03 00:29:06.646576 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-03 00:29:06.646594 | orchestrator | Tuesday 03 March 2026 00:29:05 +0000 (0:00:01.102) 0:00:26.473 ********* 2026-03-03 00:29:06.646614 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-03 00:29:06.646635 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-03 00:29:06.646654 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-03 00:29:06.646670 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-03 00:29:06.646706 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-03 00:29:06.646720 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-03 00:29:06.646733 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-03 00:29:06.646746 | orchestrator | 
skipping: [testbed-manager] 2026-03-03 00:29:06.646760 | orchestrator | 2026-03-03 00:29:06.646772 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-03 00:29:06.646786 | orchestrator | Tuesday 03 March 2026 00:29:05 +0000 (0:00:00.205) 0:00:26.678 ********* 2026-03-03 00:29:06.646811 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:29:06.646824 | orchestrator | 2026-03-03 00:29:06.646837 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-03 00:29:06.646850 | orchestrator | Tuesday 03 March 2026 00:29:05 +0000 (0:00:00.065) 0:00:26.743 ********* 2026-03-03 00:29:06.646863 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:29:06.646876 | orchestrator | 2026-03-03 00:29:06.646889 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-03 00:29:06.646902 | orchestrator | Tuesday 03 March 2026 00:29:05 +0000 (0:00:00.068) 0:00:26.812 ********* 2026-03-03 00:29:06.646916 | orchestrator | changed: [testbed-manager] 2026-03-03 00:29:06.646928 | orchestrator | 2026-03-03 00:29:06.646939 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:29:06.646950 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-03 00:29:06.646963 | orchestrator | 2026-03-03 00:29:06.646974 | orchestrator | 2026-03-03 00:29:06.646985 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:29:06.646996 | orchestrator | Tuesday 03 March 2026 00:29:06 +0000 (0:00:00.829) 0:00:27.641 ********* 2026-03-03 00:29:06.647007 | orchestrator | =============================================================================== 2026-03-03 00:29:06.647018 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.81s 2026-03-03 
00:29:06.647029 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s 2026-03-03 00:29:06.647041 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-03-03 00:29:06.647052 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-03 00:29:06.647062 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-03 00:29:06.647073 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-03 00:29:06.647084 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-03 00:29:06.647095 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-03 00:29:06.647106 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-03 00:29:06.647117 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-03 00:29:06.647128 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-03 00:29:06.647138 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-03 00:29:06.647149 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-03 00:29:06.647169 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-03 00:29:06.647180 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-03 00:29:06.647191 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-03 00:29:06.647202 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.83s 2026-03-03 
00:29:06.647212 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.21s 2026-03-03 00:29:06.647223 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-03 00:29:06.647235 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-03 00:29:07.138358 | orchestrator | + osism apply squid 2026-03-03 00:29:19.271901 | orchestrator | 2026-03-03 00:29:19 | INFO  | Prepare task for execution of squid. 2026-03-03 00:29:19.344278 | orchestrator | 2026-03-03 00:29:19 | INFO  | Task 9a29b693-cdc7-40ff-8d10-5e75ce2a59b1 (squid) was prepared for execution. 2026-03-03 00:29:19.344438 | orchestrator | 2026-03-03 00:29:19 | INFO  | It takes a moment until task 9a29b693-cdc7-40ff-8d10-5e75ce2a59b1 (squid) has been started and output is visible here. 2026-03-03 00:31:25.342970 | orchestrator | 2026-03-03 00:31:25.343158 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-03 00:31:25.343186 | orchestrator | 2026-03-03 00:31:25.343257 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-03 00:31:25.343277 | orchestrator | Tuesday 03 March 2026 00:29:23 +0000 (0:00:00.166) 0:00:00.166 ********* 2026-03-03 00:31:25.343295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-03 00:31:25.343314 | orchestrator | 2026-03-03 00:31:25.343331 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-03 00:31:25.343347 | orchestrator | Tuesday 03 March 2026 00:29:24 +0000 (0:00:00.088) 0:00:00.254 ********* 2026-03-03 00:31:25.343362 | orchestrator | ok: [testbed-manager] 2026-03-03 00:31:25.343381 | orchestrator | 2026-03-03 00:31:25.343397 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-03 00:31:25.343413 | orchestrator | Tuesday 03 March 2026 00:29:25 +0000 (0:00:01.557) 0:00:01.812 ********* 2026-03-03 00:31:25.343430 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-03 00:31:25.343446 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-03 00:31:25.343463 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-03 00:31:25.343481 | orchestrator | 2026-03-03 00:31:25.343498 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-03 00:31:25.343515 | orchestrator | Tuesday 03 March 2026 00:29:26 +0000 (0:00:01.205) 0:00:03.017 ********* 2026-03-03 00:31:25.343533 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-03 00:31:25.343549 | orchestrator | 2026-03-03 00:31:25.343566 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-03 00:31:25.343582 | orchestrator | Tuesday 03 March 2026 00:29:27 +0000 (0:00:01.142) 0:00:04.160 ********* 2026-03-03 00:31:25.343599 | orchestrator | ok: [testbed-manager] 2026-03-03 00:31:25.343615 | orchestrator | 2026-03-03 00:31:25.343631 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-03 00:31:25.343648 | orchestrator | Tuesday 03 March 2026 00:29:28 +0000 (0:00:00.372) 0:00:04.532 ********* 2026-03-03 00:31:25.343664 | orchestrator | changed: [testbed-manager] 2026-03-03 00:31:25.343682 | orchestrator | 2026-03-03 00:31:25.343699 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-03 00:31:25.343717 | orchestrator | Tuesday 03 March 2026 00:29:29 +0000 (0:00:00.961) 0:00:05.494 ********* 2026-03-03 00:31:25.343735 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-03-03 00:31:25.343752 | orchestrator | ok: [testbed-manager] 2026-03-03 00:31:25.343770 | orchestrator | 2026-03-03 00:31:25.343787 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-03 00:31:25.343802 | orchestrator | Tuesday 03 March 2026 00:30:08 +0000 (0:00:39.298) 0:00:44.792 ********* 2026-03-03 00:31:25.343819 | orchestrator | changed: [testbed-manager] 2026-03-03 00:31:25.343837 | orchestrator | 2026-03-03 00:31:25.343875 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-03 00:31:25.343893 | orchestrator | Tuesday 03 March 2026 00:30:24 +0000 (0:00:15.703) 0:01:00.495 ********* 2026-03-03 00:31:25.343911 | orchestrator | Pausing for 60 seconds 2026-03-03 00:31:25.343928 | orchestrator | changed: [testbed-manager] 2026-03-03 00:31:25.343946 | orchestrator | 2026-03-03 00:31:25.343962 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-03 00:31:25.343978 | orchestrator | Tuesday 03 March 2026 00:31:24 +0000 (0:01:00.105) 0:02:00.601 ********* 2026-03-03 00:31:25.343994 | orchestrator | ok: [testbed-manager] 2026-03-03 00:31:25.344009 | orchestrator | 2026-03-03 00:31:25.344025 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-03 00:31:25.344077 | orchestrator | Tuesday 03 March 2026 00:31:24 +0000 (0:00:00.064) 0:02:00.665 ********* 2026-03-03 00:31:25.344095 | orchestrator | changed: [testbed-manager] 2026-03-03 00:31:25.344184 | orchestrator | 2026-03-03 00:31:25.344200 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:31:25.344216 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:31:25.344232 | orchestrator | 2026-03-03 00:31:25.344247 | orchestrator | 2026-03-03 00:31:25.344264 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:31:25.344280 | orchestrator | Tuesday 03 March 2026 00:31:25 +0000 (0:00:00.665) 0:02:01.331 ********* 2026-03-03 00:31:25.344297 | orchestrator | =============================================================================== 2026-03-03 00:31:25.344313 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.11s 2026-03-03 00:31:25.344330 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 39.30s 2026-03-03 00:31:25.344347 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.70s 2026-03-03 00:31:25.344362 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.56s 2026-03-03 00:31:25.344379 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.21s 2026-03-03 00:31:25.344395 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.14s 2026-03-03 00:31:25.344412 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.96s 2026-03-03 00:31:25.344428 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s 2026-03-03 00:31:25.344445 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2026-03-03 00:31:25.344462 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-03-03 00:31:25.344479 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-03-03 00:31:25.634905 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-03 00:31:25.635012 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-03 00:31:25.641759 | orchestrator | + set -e 2026-03-03 00:31:25.641862 | orchestrator | + NAMESPACE=kolla 
2026-03-03 00:31:25.641879 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-03 00:31:25.648372 | orchestrator | ++ semver latest 9.0.0 2026-03-03 00:31:25.700992 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-03 00:31:25.701181 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-03 00:31:25.701567 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-03 00:31:37.865885 | orchestrator | 2026-03-03 00:31:37 | INFO  | Prepare task for execution of operator. 2026-03-03 00:31:37.941426 | orchestrator | 2026-03-03 00:31:37 | INFO  | Task 275c99ed-1487-452a-9724-7ecd742085f0 (operator) was prepared for execution. 2026-03-03 00:31:37.941523 | orchestrator | 2026-03-03 00:31:37 | INFO  | It takes a moment until task 275c99ed-1487-452a-9724-7ecd742085f0 (operator) has been started and output is visible here. 2026-03-03 00:31:54.724604 | orchestrator | 2026-03-03 00:31:54.724699 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-03 00:31:54.724712 | orchestrator | 2026-03-03 00:31:54.724720 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-03 00:31:54.724728 | orchestrator | Tuesday 03 March 2026 00:31:42 +0000 (0:00:00.167) 0:00:00.167 ********* 2026-03-03 00:31:54.724736 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:31:54.724745 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:31:54.724753 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:31:54.724760 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:31:54.724767 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:31:54.724777 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:31:54.724785 | orchestrator | 2026-03-03 00:31:54.724792 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-03 00:31:54.724823 | orchestrator | Tuesday 03 March 2026 
00:31:45 +0000 (0:00:03.389) 0:00:03.557 ********* 2026-03-03 00:31:54.724830 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:31:54.724837 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:31:54.724844 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:31:54.724852 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:31:54.724859 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:31:54.724866 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:31:54.724873 | orchestrator | 2026-03-03 00:31:54.724880 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-03 00:31:54.724887 | orchestrator | 2026-03-03 00:31:54.724894 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-03 00:31:54.724902 | orchestrator | Tuesday 03 March 2026 00:31:46 +0000 (0:00:00.828) 0:00:04.385 ********* 2026-03-03 00:31:54.724922 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:31:54.724929 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:31:54.724943 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:31:54.724951 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:31:54.724958 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:31:54.724965 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:31:54.724972 | orchestrator | 2026-03-03 00:31:54.724979 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-03 00:31:54.724986 | orchestrator | Tuesday 03 March 2026 00:31:46 +0000 (0:00:00.161) 0:00:04.547 ********* 2026-03-03 00:31:54.724993 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:31:54.725000 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:31:54.725007 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:31:54.725014 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:31:54.725090 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:31:54.725104 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:31:54.725116 | 
orchestrator | 2026-03-03 00:31:54.725128 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-03 00:31:54.725139 | orchestrator | Tuesday 03 March 2026 00:31:46 +0000 (0:00:00.177) 0:00:04.725 ********* 2026-03-03 00:31:54.725151 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:31:54.725163 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:31:54.725175 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:31:54.725189 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:31:54.725201 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:31:54.725212 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:31:54.725224 | orchestrator | 2026-03-03 00:31:54.725237 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-03 00:31:54.725248 | orchestrator | Tuesday 03 March 2026 00:31:47 +0000 (0:00:00.634) 0:00:05.360 ********* 2026-03-03 00:31:54.725260 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:31:54.725274 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:31:54.725287 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:31:54.725299 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:31:54.725313 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:31:54.725326 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:31:54.725339 | orchestrator | 2026-03-03 00:31:54.725353 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-03 00:31:54.725366 | orchestrator | Tuesday 03 March 2026 00:31:48 +0000 (0:00:00.968) 0:00:06.329 ********* 2026-03-03 00:31:54.725379 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-03 00:31:54.725393 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-03 00:31:54.725407 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-03 00:31:54.725420 | orchestrator | changed: [testbed-node-4] => 
(item=adm) 2026-03-03 00:31:54.725433 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-03 00:31:54.725445 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-03 00:31:54.725458 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-03 00:31:54.725472 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-03 00:31:54.725498 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-03 00:31:54.725512 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-03 00:31:54.725525 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-03 00:31:54.725538 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-03 00:31:54.725551 | orchestrator | 2026-03-03 00:31:54.725559 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-03 00:31:54.725568 | orchestrator | Tuesday 03 March 2026 00:31:49 +0000 (0:00:01.308) 0:00:07.637 ********* 2026-03-03 00:31:54.725580 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:31:54.725592 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:31:54.725603 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:31:54.725615 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:31:54.725628 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:31:54.725641 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:31:54.725653 | orchestrator | 2026-03-03 00:31:54.725665 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-03 00:31:54.725674 | orchestrator | Tuesday 03 March 2026 00:31:50 +0000 (0:00:01.358) 0:00:08.995 ********* 2026-03-03 00:31:54.725681 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-03 00:31:54.725689 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-03 00:31:54.725696 | orchestrator | changed: [testbed-node-4] => (item=export 
LANGUAGE=C.UTF-8) 2026-03-03 00:31:54.725703 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-03 00:31:54.725711 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-03 00:31:54.725737 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-03 00:31:54.725745 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-03 00:31:54.725752 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-03 00:31:54.725760 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-03 00:31:54.725767 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-03 00:31:54.725774 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-03 00:31:54.725781 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-03 00:31:54.725787 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-03 00:31:54.725795 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-03 00:31:54.725802 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-03 00:31:54.725809 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-03 00:31:54.725816 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-03 00:31:54.725823 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-03 00:31:54.725830 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-03 00:31:54.725837 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-03 00:31:54.725844 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-03 00:31:54.725851 | orchestrator | 2026-03-03 00:31:54.725858 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-03 00:31:54.725866 | orchestrator | Tuesday 03 March 2026 00:31:52 +0000 (0:00:01.514) 0:00:10.509 ********* 2026-03-03 00:31:54.725873 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:31:54.725881 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:31:54.725888 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:31:54.725901 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:31:54.725909 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:31:54.725916 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:31:54.725923 | orchestrator | 2026-03-03 00:31:54.725930 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-03 00:31:54.725943 | orchestrator | Tuesday 03 March 2026 00:31:52 +0000 (0:00:00.166) 0:00:10.676 ********* 2026-03-03 00:31:54.725951 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:31:54.725958 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:31:54.725965 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:31:54.725972 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:31:54.725980 | orchestrator | skipping: 
[testbed-node-4] 2026-03-03 00:31:54.725987 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:31:54.725994 | orchestrator | 2026-03-03 00:31:54.726001 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-03 00:31:54.726009 | orchestrator | Tuesday 03 March 2026 00:31:52 +0000 (0:00:00.189) 0:00:10.865 ********* 2026-03-03 00:31:54.726089 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:31:54.726100 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:31:54.726107 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:31:54.726114 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:31:54.726121 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:31:54.726128 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:31:54.726135 | orchestrator | 2026-03-03 00:31:54.726143 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-03 00:31:54.726150 | orchestrator | Tuesday 03 March 2026 00:31:53 +0000 (0:00:00.566) 0:00:11.432 ********* 2026-03-03 00:31:54.726157 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:31:54.726164 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:31:54.726198 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:31:54.726206 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:31:54.726213 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:31:54.726220 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:31:54.726227 | orchestrator | 2026-03-03 00:31:54.726235 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-03 00:31:54.726242 | orchestrator | Tuesday 03 March 2026 00:31:53 +0000 (0:00:00.170) 0:00:11.602 ********* 2026-03-03 00:31:54.726249 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-03 00:31:54.726256 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-03 00:31:54.726264 | 
orchestrator | changed: [testbed-node-1] 2026-03-03 00:31:54.726271 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:31:54.726278 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-03 00:31:54.726285 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:31:54.726292 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-03 00:31:54.726300 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-03 00:31:54.726309 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:31:54.726321 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:31:54.726340 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-03 00:31:54.726355 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:31:54.726366 | orchestrator | 2026-03-03 00:31:54.726376 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-03 00:31:54.726387 | orchestrator | Tuesday 03 March 2026 00:31:54 +0000 (0:00:00.844) 0:00:12.447 ********* 2026-03-03 00:31:54.726397 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:31:54.726407 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:31:54.726419 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:31:54.726429 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:31:54.726441 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:31:54.726451 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:31:54.726462 | orchestrator | 2026-03-03 00:31:54.726473 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-03 00:31:54.726484 | orchestrator | Tuesday 03 March 2026 00:31:54 +0000 (0:00:00.181) 0:00:12.628 ********* 2026-03-03 00:31:54.726495 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:31:54.726507 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:31:54.726518 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:31:54.726538 | orchestrator | skipping: 
[testbed-node-3] 2026-03-03 00:31:54.726562 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:31:56.115255 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:31:56.115389 | orchestrator | 2026-03-03 00:31:56.115406 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-03 00:31:56.115420 | orchestrator | Tuesday 03 March 2026 00:31:54 +0000 (0:00:00.157) 0:00:12.786 ********* 2026-03-03 00:31:56.115431 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:31:56.115442 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:31:56.115453 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:31:56.115464 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:31:56.115475 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:31:56.115485 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:31:56.115496 | orchestrator | 2026-03-03 00:31:56.115507 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-03 00:31:56.115518 | orchestrator | Tuesday 03 March 2026 00:31:54 +0000 (0:00:00.154) 0:00:12.940 ********* 2026-03-03 00:31:56.115529 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:31:56.115540 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:31:56.115551 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:31:56.115561 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:31:56.115572 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:31:56.115583 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:31:56.115593 | orchestrator | 2026-03-03 00:31:56.115604 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-03 00:31:56.115615 | orchestrator | Tuesday 03 March 2026 00:31:55 +0000 (0:00:00.746) 0:00:13.687 ********* 2026-03-03 00:31:56.115626 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:31:56.115636 | orchestrator | skipping: 
[testbed-node-1] 2026-03-03 00:31:56.115647 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:31:56.115658 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:31:56.115668 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:31:56.115679 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:31:56.115690 | orchestrator | 2026-03-03 00:31:56.115700 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:31:56.115713 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-03 00:31:56.115748 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-03 00:31:56.115760 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-03 00:31:56.115772 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-03 00:31:56.115783 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-03 00:31:56.115796 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-03 00:31:56.115809 | orchestrator | 2026-03-03 00:31:56.115822 | orchestrator | 2026-03-03 00:31:56.115834 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:31:56.115846 | orchestrator | Tuesday 03 March 2026 00:31:55 +0000 (0:00:00.235) 0:00:13.923 ********* 2026-03-03 00:31:56.115859 | orchestrator | =============================================================================== 2026-03-03 00:31:56.115871 | orchestrator | Gathering Facts --------------------------------------------------------- 3.39s 2026-03-03 00:31:56.115884 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.51s 2026-03-03 
00:31:56.115897 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.36s 2026-03-03 00:31:56.115931 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.31s 2026-03-03 00:31:56.115943 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.97s 2026-03-03 00:31:56.115956 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.84s 2026-03-03 00:31:56.115969 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s 2026-03-03 00:31:56.115981 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.75s 2026-03-03 00:31:56.115994 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s 2026-03-03 00:31:56.116006 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2026-03-03 00:31:56.116043 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2026-03-03 00:31:56.116058 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s 2026-03-03 00:31:56.116071 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2026-03-03 00:31:56.116084 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2026-03-03 00:31:56.116096 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2026-03-03 00:31:56.116109 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s 2026-03-03 00:31:56.116122 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2026-03-03 00:31:56.116134 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 
2026-03-03 00:31:56.116147 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2026-03-03 00:31:56.453909 | orchestrator | + osism apply --environment custom facts 2026-03-03 00:31:58.425930 | orchestrator | 2026-03-03 00:31:58 | INFO  | Trying to run play facts in environment custom 2026-03-03 00:32:08.434981 | orchestrator | 2026-03-03 00:32:08 | INFO  | Prepare task for execution of facts. 2026-03-03 00:32:08.512262 | orchestrator | 2026-03-03 00:32:08 | INFO  | Task 81c66381-00c2-4357-b428-dfdf0c552194 (facts) was prepared for execution. 2026-03-03 00:32:08.512384 | orchestrator | 2026-03-03 00:32:08 | INFO  | It takes a moment until task 81c66381-00c2-4357-b428-dfdf0c552194 (facts) has been started and output is visible here. 2026-03-03 00:32:54.661767 | orchestrator | 2026-03-03 00:32:54.661854 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-03 00:32:54.661862 | orchestrator | 2026-03-03 00:32:54.661867 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-03 00:32:54.661872 | orchestrator | Tuesday 03 March 2026 00:32:12 +0000 (0:00:00.070) 0:00:00.070 ********* 2026-03-03 00:32:54.661892 | orchestrator | ok: [testbed-manager] 2026-03-03 00:32:54.661898 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:32:54.661903 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:32:54.661907 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:32:54.661911 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:32:54.661915 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:32:54.661920 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:32:54.661924 | orchestrator | 2026-03-03 00:32:54.661928 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-03 00:32:54.661933 | orchestrator | Tuesday 03 March 2026 00:32:13 +0000 (0:00:01.272) 
0:00:01.343 ********* 2026-03-03 00:32:54.661937 | orchestrator | ok: [testbed-manager] 2026-03-03 00:32:54.661941 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:32:54.661945 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:32:54.661949 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:32:54.661953 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:32:54.661969 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:32:54.661973 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:32:54.661993 | orchestrator | 2026-03-03 00:32:54.661997 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-03 00:32:54.662001 | orchestrator | 2026-03-03 00:32:54.662005 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-03 00:32:54.662009 | orchestrator | Tuesday 03 March 2026 00:32:15 +0000 (0:00:01.284) 0:00:02.627 ********* 2026-03-03 00:32:54.662013 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:32:54.662048 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:32:54.662053 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:32:54.662057 | orchestrator | 2026-03-03 00:32:54.662061 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-03 00:32:54.662066 | orchestrator | Tuesday 03 March 2026 00:32:15 +0000 (0:00:00.117) 0:00:02.745 ********* 2026-03-03 00:32:54.662084 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:32:54.662089 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:32:54.662092 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:32:54.662096 | orchestrator | 2026-03-03 00:32:54.662100 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-03 00:32:54.662104 | orchestrator | Tuesday 03 March 2026 00:32:15 +0000 (0:00:00.232) 0:00:02.978 ********* 2026-03-03 00:32:54.662108 | orchestrator | ok: [testbed-node-3] 
2026-03-03 00:32:54.662112 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:32:54.662116 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:32:54.662120 | orchestrator | 2026-03-03 00:32:54.662124 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-03 00:32:54.662127 | orchestrator | Tuesday 03 March 2026 00:32:15 +0000 (0:00:00.277) 0:00:03.256 ********* 2026-03-03 00:32:54.662132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 00:32:54.662137 | orchestrator | 2026-03-03 00:32:54.662141 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-03 00:32:54.662148 | orchestrator | Tuesday 03 March 2026 00:32:16 +0000 (0:00:00.132) 0:00:03.388 ********* 2026-03-03 00:32:54.662155 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:32:54.662161 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:32:54.662168 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:32:54.662174 | orchestrator | 2026-03-03 00:32:54.662181 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-03 00:32:54.662187 | orchestrator | Tuesday 03 March 2026 00:32:16 +0000 (0:00:00.453) 0:00:03.841 ********* 2026-03-03 00:32:54.662194 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:32:54.662202 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:32:54.662206 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:32:54.662210 | orchestrator | 2026-03-03 00:32:54.662214 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-03 00:32:54.662217 | orchestrator | Tuesday 03 March 2026 00:32:16 +0000 (0:00:00.126) 0:00:03.968 ********* 2026-03-03 00:32:54.662221 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:32:54.662225 | 
orchestrator | changed: [testbed-node-3] 2026-03-03 00:32:54.662229 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:32:54.662233 | orchestrator | 2026-03-03 00:32:54.662237 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-03 00:32:54.662241 | orchestrator | Tuesday 03 March 2026 00:32:17 +0000 (0:00:01.059) 0:00:05.027 ********* 2026-03-03 00:32:54.662245 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:32:54.662249 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:32:54.662253 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:32:54.662257 | orchestrator | 2026-03-03 00:32:54.662261 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-03 00:32:54.662265 | orchestrator | Tuesday 03 March 2026 00:32:18 +0000 (0:00:00.491) 0:00:05.519 ********* 2026-03-03 00:32:54.662269 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:32:54.662273 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:32:54.662277 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:32:54.662285 | orchestrator | 2026-03-03 00:32:54.662289 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-03 00:32:54.662293 | orchestrator | Tuesday 03 March 2026 00:32:19 +0000 (0:00:01.065) 0:00:06.585 ********* 2026-03-03 00:32:54.662297 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:32:54.662301 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:32:54.662305 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:32:54.662309 | orchestrator | 2026-03-03 00:32:54.662313 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-03 00:32:54.662317 | orchestrator | Tuesday 03 March 2026 00:32:36 +0000 (0:00:16.915) 0:00:23.500 ********* 2026-03-03 00:32:54.662321 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:32:54.662325 | orchestrator | skipping: 
[testbed-node-4] 2026-03-03 00:32:54.662329 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:32:54.662333 | orchestrator | 2026-03-03 00:32:54.662337 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-03 00:32:54.662353 | orchestrator | Tuesday 03 March 2026 00:32:36 +0000 (0:00:00.108) 0:00:23.609 ********* 2026-03-03 00:32:54.662357 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:32:54.662361 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:32:54.662365 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:32:54.662370 | orchestrator | 2026-03-03 00:32:54.662374 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-03 00:32:54.662378 | orchestrator | Tuesday 03 March 2026 00:32:44 +0000 (0:00:08.122) 0:00:31.731 ********* 2026-03-03 00:32:54.662382 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:32:54.662386 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:32:54.662390 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:32:54.662394 | orchestrator | 2026-03-03 00:32:54.662398 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-03 00:32:54.662402 | orchestrator | Tuesday 03 March 2026 00:32:44 +0000 (0:00:00.548) 0:00:32.280 ********* 2026-03-03 00:32:54.662406 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-03-03 00:32:54.662410 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-03-03 00:32:54.662414 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-03-03 00:32:54.662418 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-03-03 00:32:54.662422 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-03-03 00:32:54.662434 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-03-03 00:32:54.662439 | 
orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-03-03 00:32:54.662447 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-03-03 00:32:54.662451 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-03-03 00:32:54.662456 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-03-03 00:32:54.662459 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-03-03 00:32:54.662463 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-03-03 00:32:54.662467 | orchestrator | 2026-03-03 00:32:54.662471 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-03 00:32:54.662475 | orchestrator | Tuesday 03 March 2026 00:32:48 +0000 (0:00:03.575) 0:00:35.855 ********* 2026-03-03 00:32:54.662479 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:32:54.662483 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:32:54.662487 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:32:54.662491 | orchestrator | 2026-03-03 00:32:54.662495 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-03 00:32:54.662499 | orchestrator | 2026-03-03 00:32:54.662503 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-03 00:32:54.662507 | orchestrator | Tuesday 03 March 2026 00:32:49 +0000 (0:00:01.381) 0:00:37.237 ********* 2026-03-03 00:32:54.662516 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:32:54.662519 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:32:54.662523 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:32:54.662527 | orchestrator | ok: [testbed-manager] 2026-03-03 00:32:54.662531 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:32:54.662561 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:32:54.662565 | orchestrator | ok: 
[testbed-node-4] 2026-03-03 00:32:54.662569 | orchestrator | 2026-03-03 00:32:54.662573 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:32:54.662578 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:32:54.662583 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:32:54.662588 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:32:54.662592 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:32:54.662596 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:32:54.662600 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:32:54.662604 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:32:54.662608 | orchestrator | 2026-03-03 00:32:54.662612 | orchestrator | 2026-03-03 00:32:54.662616 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:32:54.662620 | orchestrator | Tuesday 03 March 2026 00:32:54 +0000 (0:00:04.767) 0:00:42.005 ********* 2026-03-03 00:32:54.662624 | orchestrator | =============================================================================== 2026-03-03 00:32:54.662628 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.92s 2026-03-03 00:32:54.662632 | orchestrator | Install required packages (Debian) -------------------------------------- 8.12s 2026-03-03 00:32:54.662636 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.77s 2026-03-03 00:32:54.662640 | orchestrator | Copy fact files 
--------------------------------------------------------- 3.58s 2026-03-03 00:32:54.662644 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.38s 2026-03-03 00:32:54.662648 | orchestrator | Copy fact file ---------------------------------------------------------- 1.28s 2026-03-03 00:32:54.662655 | orchestrator | Create custom facts directory ------------------------------------------- 1.27s 2026-03-03 00:32:54.853149 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s 2026-03-03 00:32:54.853240 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s 2026-03-03 00:32:54.853251 | orchestrator | Create custom facts directory ------------------------------------------- 0.55s 2026-03-03 00:32:54.853259 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s 2026-03-03 00:32:54.853266 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2026-03-03 00:32:54.853273 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.28s 2026-03-03 00:32:54.853281 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s 2026-03-03 00:32:54.853289 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2026-03-03 00:32:54.853296 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s 2026-03-03 00:32:54.853323 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2026-03-03 00:32:54.853348 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2026-03-03 00:32:55.146227 | orchestrator | + osism apply bootstrap 2026-03-03 00:33:07.175916 | orchestrator | 2026-03-03 00:33:07 | INFO  | Prepare task for execution of bootstrap. 
2026-03-03 00:33:07.248368 | orchestrator | 2026-03-03 00:33:07 | INFO  | Task 191a3fcc-e2cd-4978-98ae-b1f2fcd51c50 (bootstrap) was prepared for execution. 2026-03-03 00:33:07.248489 | orchestrator | 2026-03-03 00:33:07 | INFO  | It takes a moment until task 191a3fcc-e2cd-4978-98ae-b1f2fcd51c50 (bootstrap) has been started and output is visible here. 2026-03-03 00:33:23.741880 | orchestrator | 2026-03-03 00:33:23.742005 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-03-03 00:33:23.742080 | orchestrator | 2026-03-03 00:33:23.742095 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-03-03 00:33:23.742107 | orchestrator | Tuesday 03 March 2026 00:33:11 +0000 (0:00:00.138) 0:00:00.138 ********* 2026-03-03 00:33:23.742118 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:33:23.742131 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:33:23.742142 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:33:23.742154 | orchestrator | ok: [testbed-manager] 2026-03-03 00:33:23.742165 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:33:23.742175 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:33:23.742186 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:33:23.742197 | orchestrator | 2026-03-03 00:33:23.742209 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-03 00:33:23.742220 | orchestrator | 2026-03-03 00:33:23.742231 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-03 00:33:23.742242 | orchestrator | Tuesday 03 March 2026 00:33:11 +0000 (0:00:00.240) 0:00:00.379 ********* 2026-03-03 00:33:23.742254 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:33:23.742265 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:33:23.742277 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:33:23.742288 | orchestrator | ok: [testbed-manager] 2026-03-03 
00:33:23.742299 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:23.742309 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:23.742320 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:23.742331 | orchestrator |
2026-03-03 00:33:23.742342 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-03 00:33:23.742355 | orchestrator |
2026-03-03 00:33:23.742368 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-03 00:33:23.742381 | orchestrator | Tuesday 03 March 2026 00:33:15 +0000 (0:00:03.802) 0:00:04.181 *********
2026-03-03 00:33:23.742393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-03 00:33:23.742407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-03 00:33:23.742419 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-03 00:33:23.742431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-03 00:33:23.742444 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-03 00:33:23.742456 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-03 00:33:23.742468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-03 00:33:23.742480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-03 00:33:23.742493 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-03 00:33:23.742505 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-03 00:33:23.742517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-03 00:33:23.742529 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-03 00:33:23.742541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-03 00:33:23.742554 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-03 00:33:23.742567 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-03 00:33:23.742604 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-03 00:33:23.742617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-03 00:33:23.742629 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-03 00:33:23.742641 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:33:23.742654 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-03 00:33:23.742666 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-03 00:33:23.742683 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-03 00:33:23.742701 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-03 00:33:23.742714 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-03 00:33:23.742725 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-03 00:33:23.742735 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:33:23.742747 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-03 00:33:23.742757 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-03 00:33:23.742768 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-03 00:33:23.742779 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-03 00:33:23.742789 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-03 00:33:23.742800 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-03 00:33:23.742837 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:33:23.742849 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-03 00:33:23.742860 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-03 00:33:23.742871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-03 00:33:23.742881 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:33:23.742892 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-03 00:33:23.742903 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-03 00:33:23.742914 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-03 00:33:23.742925 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-03 00:33:23.742936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-03 00:33:23.742947 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-03 00:33:23.742957 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-03 00:33:23.742968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-03 00:33:23.742979 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-03 00:33:23.743010 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-03 00:33:23.743022 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:33:23.743033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-03 00:33:23.743044 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-03 00:33:23.743055 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-03 00:33:23.743065 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:33:23.743076 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-03 00:33:23.743087 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-03 00:33:23.743098 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-03 00:33:23.743108 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:33:23.743119 | orchestrator |
2026-03-03 00:33:23.743130 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-03 00:33:23.743141 | orchestrator |
2026-03-03 00:33:23.743152 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-03 00:33:23.743162 | orchestrator | Tuesday 03 March 2026 00:33:16 +0000 (0:00:00.578) 0:00:04.760 *********
2026-03-03 00:33:23.743174 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:23.743193 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:23.743204 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:23.743215 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:23.743225 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:23.743236 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:23.743246 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:23.743257 | orchestrator |
2026-03-03 00:33:23.743268 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-03 00:33:23.743279 | orchestrator | Tuesday 03 March 2026 00:33:17 +0000 (0:00:01.296) 0:00:06.057 *********
2026-03-03 00:33:23.743290 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:23.743300 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:23.743311 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:23.743322 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:23.743332 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:23.743343 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:23.743353 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:23.743364 | orchestrator |
2026-03-03 00:33:23.743375 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-03 00:33:23.743401 | orchestrator | Tuesday 03 March 2026 00:33:18 +0000 (0:00:01.351) 0:00:07.409 *********
2026-03-03 00:33:23.743413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:33:23.743437 | orchestrator |
2026-03-03 00:33:23.743449 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-03 00:33:23.743460 | orchestrator | Tuesday 03 March 2026 00:33:19 +0000 (0:00:00.282) 0:00:07.691 *********
2026-03-03 00:33:23.743471 | orchestrator | changed: [testbed-manager]
2026-03-03 00:33:23.743482 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:33:23.743492 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:33:23.743503 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:33:23.743514 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:33:23.743524 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:33:23.743535 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:33:23.743546 | orchestrator |
2026-03-03 00:33:23.743557 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-03 00:33:23.743567 | orchestrator | Tuesday 03 March 2026 00:33:21 +0000 (0:00:02.056) 0:00:09.748 *********
2026-03-03 00:33:23.743578 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:33:23.743591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:33:23.743603 | orchestrator |
2026-03-03 00:33:23.743615 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-03 00:33:23.743626 | orchestrator | Tuesday 03 March 2026 00:33:21 +0000 (0:00:00.251) 0:00:09.999 *********
2026-03-03 00:33:23.743636 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:33:23.743647 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:33:23.743658 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:33:23.743668 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:33:23.743679 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:33:23.743706 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:33:23.743718 | orchestrator |
2026-03-03 00:33:23.743729 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-03 00:33:23.743740 | orchestrator | Tuesday 03 March 2026 00:33:22 +0000 (0:00:01.099) 0:00:11.099 *********
2026-03-03 00:33:23.743751 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:33:23.743762 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:33:23.743773 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:33:23.743784 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:33:23.743794 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:33:23.743805 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:33:23.743867 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:33:23.743886 | orchestrator |
2026-03-03 00:33:23.743904 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-03 00:33:23.743930 | orchestrator | Tuesday 03 March 2026 00:33:23 +0000 (0:00:00.645) 0:00:11.745 *********
2026-03-03 00:33:23.743945 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:33:23.743956 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:33:23.743967 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:33:23.743977 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:33:23.743988 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:33:23.743999 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:33:23.744010 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:23.744021 | orchestrator |
2026-03-03 00:33:23.744031 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-03 00:33:23.744043 | orchestrator | Tuesday 03 March 2026 00:33:23 +0000 (0:00:00.189) 0:00:12.218 *********
2026-03-03 00:33:23.744054 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:33:23.744064 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:33:23.744084 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:33:36.475546 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:33:36.475648 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:33:36.475663 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:33:36.475674 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:33:36.475686 | orchestrator |
2026-03-03 00:33:36.475698 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-03 00:33:36.475711 | orchestrator | Tuesday 03 March 2026 00:33:23 +0000 (0:00:00.189) 0:00:12.407 *********
2026-03-03 00:33:36.475723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:33:36.475752 | orchestrator |
2026-03-03 00:33:36.475764 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-03 00:33:36.475776 | orchestrator | Tuesday 03 March 2026 00:33:24 +0000 (0:00:00.264) 0:00:12.672 *********
2026-03-03 00:33:36.475884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:33:36.475896 | orchestrator |
2026-03-03 00:33:36.475908 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-03 00:33:36.475919 | orchestrator | Tuesday 03 March 2026 00:33:24 +0000 (0:00:00.374) 0:00:13.046 *********
2026-03-03 00:33:36.475931 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.475943 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.475954 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.475965 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.475976 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:36.475986 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:36.475997 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:36.476008 | orchestrator |
2026-03-03 00:33:36.476020 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-03 00:33:36.476033 | orchestrator | Tuesday 03 March 2026 00:33:25 +0000 (0:00:01.494) 0:00:14.540 *********
2026-03-03 00:33:36.476047 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:33:36.476059 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:33:36.476072 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:33:36.476084 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:33:36.476097 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:33:36.476109 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:33:36.476121 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:33:36.476133 | orchestrator |
2026-03-03 00:33:36.476146 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-03 00:33:36.476198 | orchestrator | Tuesday 03 March 2026 00:33:26 +0000 (0:00:00.198) 0:00:14.738 *********
2026-03-03 00:33:36.476212 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.476224 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.476237 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.476249 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.476262 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:36.476277 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:36.476297 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:36.476315 | orchestrator |
2026-03-03 00:33:36.476333 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-03 00:33:36.476353 | orchestrator | Tuesday 03 March 2026 00:33:26 +0000 (0:00:00.557) 0:00:15.296 *********
2026-03-03 00:33:36.476373 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:33:36.476392 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:33:36.476412 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:33:36.476431 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:33:36.476450 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:33:36.476463 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:33:36.476474 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:33:36.476485 | orchestrator |
2026-03-03 00:33:36.476496 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-03 00:33:36.476508 | orchestrator | Tuesday 03 March 2026 00:33:26 +0000 (0:00:00.265) 0:00:15.561 *********
2026-03-03 00:33:36.476520 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:33:36.476530 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.476541 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:33:36.476552 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:33:36.476562 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:33:36.476573 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:33:36.476584 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:33:36.476594 | orchestrator |
2026-03-03 00:33:36.476605 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-03 00:33:36.476616 | orchestrator | Tuesday 03 March 2026 00:33:27 +0000 (0:00:01.144) 0:00:16.127 *********
2026-03-03 00:33:36.476627 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.476637 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:33:36.476648 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:33:36.476659 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:33:36.476669 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:33:36.476680 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:33:36.476690 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:33:36.476701 | orchestrator |
2026-03-03 00:33:36.476737 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-03 00:33:36.476760 | orchestrator | Tuesday 03 March 2026 00:33:28 +0000 (0:00:01.144) 0:00:17.271 *********
2026-03-03 00:33:36.476771 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.476808 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.476820 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:36.476831 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:36.476842 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.476853 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:36.476863 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.476874 | orchestrator |
2026-03-03 00:33:36.476885 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-03 00:33:36.476896 | orchestrator | Tuesday 03 March 2026 00:33:29 +0000 (0:00:01.060) 0:00:18.331 *********
2026-03-03 00:33:36.476928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:33:36.476940 | orchestrator |
2026-03-03 00:33:36.476951 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-03 00:33:36.476973 | orchestrator | Tuesday 03 March 2026 00:33:30 +0000 (0:00:00.318) 0:00:18.649 *********
2026-03-03 00:33:36.476983 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:33:36.476994 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:33:36.477005 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:33:36.477016 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:33:36.477026 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:33:36.477037 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:33:36.477048 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:33:36.477058 | orchestrator |
2026-03-03 00:33:36.477069 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-03 00:33:36.477080 | orchestrator | Tuesday 03 March 2026 00:33:31 +0000 (0:00:01.309) 0:00:19.959 *********
2026-03-03 00:33:36.477091 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.477102 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.477113 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.477123 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.477134 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:36.477144 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:36.477155 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:36.477166 | orchestrator |
2026-03-03 00:33:36.477176 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-03 00:33:36.477188 | orchestrator | Tuesday 03 March 2026 00:33:31 +0000 (0:00:00.199) 0:00:20.159 *********
2026-03-03 00:33:36.477198 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.477209 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.477220 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.477231 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.477241 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:36.477252 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:36.477262 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:36.477275 | orchestrator |
2026-03-03 00:33:36.477294 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-03 00:33:36.477312 | orchestrator | Tuesday 03 March 2026 00:33:31 +0000 (0:00:00.223) 0:00:20.382 *********
2026-03-03 00:33:36.477329 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.477349 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.477367 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.477385 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.477401 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:36.477412 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:36.477422 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:36.477433 | orchestrator |
2026-03-03 00:33:36.477443 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-03 00:33:36.477454 | orchestrator | Tuesday 03 March 2026 00:33:31 +0000 (0:00:00.207) 0:00:20.590 *********
2026-03-03 00:33:36.477466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:33:36.477478 | orchestrator |
2026-03-03 00:33:36.477489 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-03 00:33:36.477500 | orchestrator | Tuesday 03 March 2026 00:33:32 +0000 (0:00:00.243) 0:00:20.833 *********
2026-03-03 00:33:36.477511 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.477521 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.477532 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.477542 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.477553 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:36.477564 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:36.477574 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:36.477585 | orchestrator |
2026-03-03 00:33:36.477595 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-03 00:33:36.477606 | orchestrator | Tuesday 03 March 2026 00:33:32 +0000 (0:00:00.535) 0:00:21.369 *********
2026-03-03 00:33:36.477617 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:33:36.477636 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:33:36.477647 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:33:36.477658 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:33:36.477669 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:33:36.477679 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:33:36.477690 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:33:36.477700 | orchestrator |
2026-03-03 00:33:36.477711 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-03 00:33:36.477722 | orchestrator | Tuesday 03 March 2026 00:33:32 +0000 (0:00:00.213) 0:00:21.583 *********
2026-03-03 00:33:36.477733 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.477743 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.477754 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.477765 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:33:36.477775 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:33:36.477813 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:33:36.477824 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.477835 | orchestrator |
2026-03-03 00:33:36.477846 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-03 00:33:36.477858 | orchestrator | Tuesday 03 March 2026 00:33:34 +0000 (0:00:01.871) 0:00:23.454 *********
2026-03-03 00:33:36.477868 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.477879 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.477890 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.477901 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.477911 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:33:36.477922 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:33:36.477933 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:33:36.477943 | orchestrator |
2026-03-03 00:33:36.477954 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-03 00:33:36.477965 | orchestrator | Tuesday 03 March 2026 00:33:35 +0000 (0:00:00.575) 0:00:24.030 *********
2026-03-03 00:33:36.477976 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:33:36.477987 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:33:36.477997 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:33:36.478008 | orchestrator | ok: [testbed-manager]
2026-03-03 00:33:36.478090 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:34:18.375469 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:34:18.375610 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:34:18.375636 | orchestrator |
2026-03-03 00:34:18.375654 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-03 00:34:18.375676 | orchestrator | Tuesday 03 March 2026 00:33:37 +0000 (0:00:02.155) 0:00:26.185 *********
2026-03-03 00:34:18.375818 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.375842 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.375862 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.375880 | orchestrator | changed: [testbed-manager]
2026-03-03 00:34:18.375899 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:34:18.375917 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:34:18.375936 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:34:18.375956 | orchestrator |
2026-03-03 00:34:18.375975 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-03 00:34:18.375995 | orchestrator | Tuesday 03 March 2026 00:33:55 +0000 (0:00:17.502) 0:00:43.688 *********
2026-03-03 00:34:18.376014 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.376033 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.376052 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.376072 | orchestrator | ok: [testbed-manager]
2026-03-03 00:34:18.376090 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:34:18.376108 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:34:18.376126 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:34:18.376145 | orchestrator |
2026-03-03 00:34:18.376165 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-03 00:34:18.376185 | orchestrator | Tuesday 03 March 2026 00:33:55 +0000 (0:00:00.213) 0:00:43.902 *********
2026-03-03 00:34:18.376239 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.376262 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.376280 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.376298 | orchestrator | ok: [testbed-manager]
2026-03-03 00:34:18.376317 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:34:18.376334 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:34:18.376352 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:34:18.376370 | orchestrator |
2026-03-03 00:34:18.376389 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-03 00:34:18.376408 | orchestrator | Tuesday 03 March 2026 00:33:55 +0000 (0:00:00.201) 0:00:44.104 *********
2026-03-03 00:34:18.376427 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.376445 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.376463 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.376481 | orchestrator | ok: [testbed-manager]
2026-03-03 00:34:18.376499 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:34:18.376519 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:34:18.376537 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:34:18.376556 | orchestrator |
2026-03-03 00:34:18.376573 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-03 00:34:18.376591 | orchestrator | Tuesday 03 March 2026 00:33:55 +0000 (0:00:00.221) 0:00:44.325 *********
2026-03-03 00:34:18.376609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:34:18.376632 | orchestrator |
2026-03-03 00:34:18.376652 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-03 00:34:18.376669 | orchestrator | Tuesday 03 March 2026 00:33:56 +0000 (0:00:00.310) 0:00:44.635 *********
2026-03-03 00:34:18.376709 | orchestrator | ok: [testbed-manager]
2026-03-03 00:34:18.376729 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.376746 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.376765 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:34:18.376806 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.376826 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:34:18.376844 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:34:18.376863 | orchestrator |
2026-03-03 00:34:18.376881 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-03 00:34:18.376918 | orchestrator | Tuesday 03 March 2026 00:33:58 +0000 (0:00:01.972) 0:00:46.608 *********
2026-03-03 00:34:18.376950 | orchestrator | changed: [testbed-manager]
2026-03-03 00:34:18.376970 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:34:18.376989 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:34:18.377007 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:34:18.377026 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:34:18.377043 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:34:18.377092 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:34:18.377111 | orchestrator |
2026-03-03 00:34:18.377130 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-03 00:34:18.377143 | orchestrator | Tuesday 03 March 2026 00:33:59 +0000 (0:00:01.136) 0:00:47.744 *********
2026-03-03 00:34:18.377154 | orchestrator | ok: [testbed-manager]
2026-03-03 00:34:18.377165 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.377175 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.377186 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.377197 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:34:18.377208 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:34:18.377218 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:34:18.377229 | orchestrator |
2026-03-03 00:34:18.377241 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-03 00:34:18.377260 | orchestrator | Tuesday 03 March 2026 00:34:00 +0000 (0:00:00.877) 0:00:48.621 *********
2026-03-03 00:34:18.377288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:34:18.377346 | orchestrator |
2026-03-03 00:34:18.377367 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-03 00:34:18.377385 | orchestrator | Tuesday 03 March 2026 00:34:00 +0000 (0:00:00.269) 0:00:48.891 *********
2026-03-03 00:34:18.377402 | orchestrator | changed: [testbed-manager]
2026-03-03 00:34:18.377418 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:34:18.377434 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:34:18.377451 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:34:18.377468 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:34:18.377485 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:34:18.377501 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:34:18.377518 | orchestrator |
2026-03-03 00:34:18.377552 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-03 00:34:18.377563 | orchestrator | Tuesday 03 March 2026 00:34:01 +0000 (0:00:01.100) 0:00:49.991 *********
2026-03-03 00:34:18.377573 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:34:18.377583 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:34:18.377592 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:34:18.377602 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:34:18.377611 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:34:18.377621 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:34:18.377631 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:34:18.377640 | orchestrator |
2026-03-03 00:34:18.377650 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-03 00:34:18.377660 | orchestrator | Tuesday 03 March 2026 00:34:01 +0000 (0:00:00.225) 0:00:50.217 *********
2026-03-03 00:34:18.377670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:34:18.377680 | orchestrator |
2026-03-03 00:34:18.377721 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-03 00:34:18.377735 | orchestrator | Tuesday 03 March 2026 00:34:01 +0000 (0:00:00.306) 0:00:50.524 *********
2026-03-03 00:34:18.377745 | orchestrator | ok: [testbed-manager]
2026-03-03 00:34:18.377755 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.377764 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.377774 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.377783 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:34:18.377792 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:34:18.377802 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:34:18.377811 | orchestrator |
2026-03-03 00:34:18.377821 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-03 00:34:18.377831 | orchestrator | Tuesday 03 March 2026 00:34:03 +0000 (0:00:01.882) 0:00:52.406 *********
2026-03-03 00:34:18.377840 | orchestrator | changed: [testbed-manager]
2026-03-03 00:34:18.377850 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:34:18.377860 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:34:18.377869 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:34:18.377879 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:34:18.377888 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:34:18.377898 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:34:18.377907 | orchestrator |
2026-03-03 00:34:18.377917 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-03 00:34:18.377926 | orchestrator | Tuesday 03 March 2026 00:34:05 +0000 (0:00:01.252) 0:00:53.659 *********
2026-03-03 00:34:18.377937 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:34:18.377954 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:34:18.377970 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:34:18.377986 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:34:18.378002 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:34:18.378087 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:34:18.378120 | orchestrator | changed: [testbed-manager]
2026-03-03 00:34:18.378135 | orchestrator |
2026-03-03 00:34:18.378149 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-03 00:34:18.378164 | orchestrator | Tuesday 03 March 2026 00:34:15 +0000 (0:00:10.433) 0:01:04.092 *********
2026-03-03 00:34:18.378178 | orchestrator | ok: [testbed-manager]
2026-03-03 00:34:18.378192 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.378206 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:34:18.378221 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.378236 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.378250 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:34:18.378264 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:34:18.378278 | orchestrator |
2026-03-03 00:34:18.378293 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-03 00:34:18.378308 | orchestrator | Tuesday 03 March 2026 00:34:16 +0000 (0:00:01.152) 0:01:05.245 *********
2026-03-03 00:34:18.378324 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.378339 | orchestrator | ok: [testbed-manager]
2026-03-03 00:34:18.378354 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.378370 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.378386 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:34:18.378401 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:34:18.378416 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:34:18.378431 | orchestrator |
2026-03-03 00:34:18.378447 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-03 00:34:18.378462 | orchestrator | Tuesday 03 March 2026 00:34:17 +0000 (0:00:01.044) 0:01:06.290 *********
2026-03-03 00:34:18.378477 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:34:18.378492 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:34:18.378508 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:34:18.378522 | orchestrator | ok:
[testbed-manager] 2026-03-03 00:34:18.378537 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:34:18.378552 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:34:18.378567 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:34:18.378583 | orchestrator | 2026-03-03 00:34:18.378597 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-03 00:34:18.378614 | orchestrator | Tuesday 03 March 2026 00:34:17 +0000 (0:00:00.197) 0:01:06.487 ********* 2026-03-03 00:34:18.378628 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:34:18.378642 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:34:18.378656 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:34:18.378706 | orchestrator | ok: [testbed-manager] 2026-03-03 00:34:18.378726 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:34:18.378741 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:34:18.378758 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:34:18.378773 | orchestrator | 2026-03-03 00:34:18.378789 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-03 00:34:18.378804 | orchestrator | Tuesday 03 March 2026 00:34:18 +0000 (0:00:00.204) 0:01:06.691 ********* 2026-03-03 00:34:18.378821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:34:18.378838 | orchestrator | 2026-03-03 00:34:18.378876 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-03 00:36:45.773923 | orchestrator | Tuesday 03 March 2026 00:34:18 +0000 (0:00:00.264) 0:01:06.955 ********* 2026-03-03 00:36:45.774057 | orchestrator | ok: [testbed-manager] 2026-03-03 00:36:45.774073 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:36:45.774081 | orchestrator | 
ok: [testbed-node-4] 2026-03-03 00:36:45.774088 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:36:45.774096 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:36:45.774103 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:36:45.774111 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:36:45.774118 | orchestrator | 2026-03-03 00:36:45.774126 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-03 00:36:45.774158 | orchestrator | Tuesday 03 March 2026 00:34:20 +0000 (0:00:01.735) 0:01:08.691 ********* 2026-03-03 00:36:45.774166 | orchestrator | changed: [testbed-manager] 2026-03-03 00:36:45.774174 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:36:45.774182 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:36:45.774189 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:36:45.774196 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:36:45.774203 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:36:45.774211 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:36:45.774218 | orchestrator | 2026-03-03 00:36:45.774226 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-03 00:36:45.774234 | orchestrator | Tuesday 03 March 2026 00:34:20 +0000 (0:00:00.661) 0:01:09.352 ********* 2026-03-03 00:36:45.774242 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:36:45.774249 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:36:45.774256 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:36:45.774263 | orchestrator | ok: [testbed-manager] 2026-03-03 00:36:45.774271 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:36:45.774278 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:36:45.774285 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:36:45.774292 | orchestrator | 2026-03-03 00:36:45.774300 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-03 
00:36:45.774307 | orchestrator | Tuesday 03 March 2026 00:34:20 +0000 (0:00:00.206) 0:01:09.559 ********* 2026-03-03 00:36:45.774314 | orchestrator | ok: [testbed-manager] 2026-03-03 00:36:45.774322 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:36:45.774329 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:36:45.774336 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:36:45.774343 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:36:45.774350 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:36:45.774358 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:36:45.774365 | orchestrator | 2026-03-03 00:36:45.774407 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-03 00:36:45.774414 | orchestrator | Tuesday 03 March 2026 00:34:22 +0000 (0:00:01.225) 0:01:10.785 ********* 2026-03-03 00:36:45.774421 | orchestrator | changed: [testbed-manager] 2026-03-03 00:36:45.774429 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:36:45.774436 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:36:45.774443 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:36:45.774451 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:36:45.774458 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:36:45.774465 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:36:45.774472 | orchestrator | 2026-03-03 00:36:45.774479 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-03 00:36:45.774487 | orchestrator | Tuesday 03 March 2026 00:34:23 +0000 (0:00:01.779) 0:01:12.564 ********* 2026-03-03 00:36:45.774494 | orchestrator | ok: [testbed-manager] 2026-03-03 00:36:45.774501 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:36:45.774509 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:36:45.774516 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:36:45.774523 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:36:45.774530 | orchestrator | ok: 
[testbed-node-0] 2026-03-03 00:36:45.774538 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:36:45.774545 | orchestrator | 2026-03-03 00:36:45.774552 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-03 00:36:45.774559 | orchestrator | Tuesday 03 March 2026 00:34:26 +0000 (0:00:02.765) 0:01:15.330 ********* 2026-03-03 00:36:45.774567 | orchestrator | ok: [testbed-manager] 2026-03-03 00:36:45.774574 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:36:45.774581 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:36:45.774588 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:36:45.774595 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:36:45.774603 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:36:45.774610 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:36:45.774623 | orchestrator | 2026-03-03 00:36:45.774630 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-03 00:36:45.774638 | orchestrator | Tuesday 03 March 2026 00:35:02 +0000 (0:00:35.823) 0:01:51.153 ********* 2026-03-03 00:36:45.774645 | orchestrator | changed: [testbed-manager] 2026-03-03 00:36:45.774652 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:36:45.774660 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:36:45.774667 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:36:45.774674 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:36:45.774681 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:36:45.774688 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:36:45.774695 | orchestrator | 2026-03-03 00:36:45.774703 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-03 00:36:45.774710 | orchestrator | Tuesday 03 March 2026 00:36:30 +0000 (0:01:28.429) 0:03:19.583 ********* 2026-03-03 00:36:45.774717 | orchestrator | ok: [testbed-manager] 2026-03-03 00:36:45.774724 | orchestrator | 
ok: [testbed-node-4] 2026-03-03 00:36:45.774732 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:36:45.774739 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:36:45.774746 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:36:45.774754 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:36:45.774761 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:36:45.774768 | orchestrator | 2026-03-03 00:36:45.774776 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-03 00:36:45.774783 | orchestrator | Tuesday 03 March 2026 00:36:33 +0000 (0:00:02.219) 0:03:21.803 ********* 2026-03-03 00:36:45.774791 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:36:45.774798 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:36:45.774805 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:36:45.774812 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:36:45.774820 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:36:45.774827 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:36:45.774834 | orchestrator | changed: [testbed-manager] 2026-03-03 00:36:45.774841 | orchestrator | 2026-03-03 00:36:45.774849 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-03 00:36:45.774856 | orchestrator | Tuesday 03 March 2026 00:36:43 +0000 (0:00:10.347) 0:03:32.151 ********* 2026-03-03 00:36:45.774891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-03 00:36:45.774909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-03 00:36:45.774919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-03 00:36:45.774928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-03 00:36:45.774941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-03 00:36:45.774952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-03-03 00:36:45.774960 | orchestrator | 2026-03-03 00:36:45.774967 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-03 00:36:45.774975 | orchestrator | Tuesday 03 March 2026 00:36:43 +0000 (0:00:00.357) 0:03:32.508 ********* 2026-03-03 00:36:45.774982 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-03 00:36:45.774990 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:36:45.774997 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-03 00:36:45.775005 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-03 00:36:45.775012 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:36:45.775019 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-03 00:36:45.775026 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:36:45.775034 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:36:45.775041 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-03 00:36:45.775058 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-03 00:36:45.775065 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-03 00:36:45.775073 | orchestrator | 2026-03-03 00:36:45.775080 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-03 00:36:45.775091 | orchestrator | Tuesday 03 March 2026 00:36:45 +0000 (0:00:01.761) 0:03:34.269 ********* 2026-03-03 00:36:45.775099 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-03 00:36:45.775108 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-03 00:36:45.775115 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-03 00:36:45.775123 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-03 00:36:45.775130 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-03 00:36:45.775142 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-03 00:36:52.986085 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-03 00:36:52.986210 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-03 00:36:52.986233 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-03 00:36:52.986247 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-03 00:36:52.986262 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-03 00:36:52.986277 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-03 00:36:52.986290 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-03 00:36:52.986329 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-03 00:36:52.986344 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-03 00:36:52.986418 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-03 00:36:52.986435 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-03 00:36:52.986450 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-03 00:36:52.986465 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-03 00:36:52.986479 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-03 00:36:52.986495 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-03 00:36:52.986504 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-03 00:36:52.986513 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-03 00:36:52.986522 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-03 00:36:52.986531 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-03 00:36:52.986539 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-03 00:36:52.986548 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-03 00:36:52.986556 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-03 00:36:52.986565 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-03 00:36:52.986576 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-03 00:36:52.986586 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-03 00:36:52.986596 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.core.wmem_max', 'value': 16777216})  2026-03-03 00:36:52.986606 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-03 00:36:52.986616 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-03 00:36:52.986626 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-03 00:36:52.986635 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-03 00:36:52.986646 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-03 00:36:52.986655 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-03 00:36:52.986665 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-03 00:36:52.986676 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-03 00:36:52.986686 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:36:52.986714 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:36:52.986723 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:36:52.986731 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:36:52.986740 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-03 00:36:52.986748 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-03 00:36:52.986758 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-03 00:36:52.986774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-03 00:36:52.986783 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-03 00:36:52.986810 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-03 00:36:52.986820 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-03 00:36:52.986828 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-03 00:36:52.986837 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-03 00:36:52.986846 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-03 00:36:52.986854 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-03 00:36:52.986863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-03 00:36:52.986872 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-03 00:36:52.986880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-03 00:36:52.986889 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-03 00:36:52.986897 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-03 00:36:52.986906 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-03 00:36:52.986919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-03 00:36:52.986933 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-03 00:36:52.986945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 
'value': 1}) 2026-03-03 00:36:52.986969 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-03 00:36:52.986983 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-03 00:36:52.986998 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-03 00:36:52.987011 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-03 00:36:52.987025 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-03 00:36:52.987039 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-03 00:36:52.987052 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-03 00:36:52.987066 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-03 00:36:52.987080 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-03 00:36:52.987095 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-03 00:36:52.987110 | orchestrator | 2026-03-03 00:36:52.987126 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-03 00:36:52.987141 | orchestrator | Tuesday 03 March 2026 00:36:51 +0000 (0:00:06.081) 0:03:40.351 ********* 2026-03-03 00:36:52.987155 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-03 00:36:52.987171 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-03 00:36:52.987179 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-03 00:36:52.987188 | orchestrator | 
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-03 00:36:52.987206 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-03 00:36:52.987215 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-03 00:36:52.987223 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-03 00:36:52.987232 | orchestrator | 2026-03-03 00:36:52.987241 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-03 00:36:52.987256 | orchestrator | Tuesday 03 March 2026 00:36:52 +0000 (0:00:00.687) 0:03:41.039 ********* 2026-03-03 00:36:52.987275 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-03 00:36:52.987292 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:36:52.987314 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-03 00:36:52.987329 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:36:52.987345 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-03 00:36:52.987386 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:36:52.987403 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-03 00:36:52.987417 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:36:52.987432 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-03 00:36:52.987446 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-03 00:36:52.987477 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-03 00:37:05.982192 | orchestrator | 2026-03-03 00:37:05.982304 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-03-03 00:37:05.982321 | orchestrator | Tuesday 03 March 2026 00:36:53 +0000 (0:00:00.559) 0:03:41.599 ********* 2026-03-03 00:37:05.982381 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-03 00:37:05.982395 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:37:05.982408 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-03 00:37:05.982420 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-03 00:37:05.982431 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:37:05.982441 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-03 00:37:05.982452 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:37:05.982463 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:37:05.982474 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-03 00:37:05.982485 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-03 00:37:05.982495 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-03 00:37:05.982506 | orchestrator | 2026-03-03 00:37:05.982517 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-03 00:37:05.982528 | orchestrator | Tuesday 03 March 2026 00:36:53 +0000 (0:00:00.623) 0:03:42.222 ********* 2026-03-03 00:37:05.982539 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  
2026-03-03 00:37:05.982550 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-03 00:37:05.982561 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:37:05.982571 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:37:05.982582 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-03 00:37:05.982620 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:37:05.982632 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-03 00:37:05.982645 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:37:05.982658 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-03 00:37:05.982671 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-03 00:37:05.982683 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-03 00:37:05.982695 | orchestrator | 2026-03-03 00:37:05.982708 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-03 00:37:05.982721 | orchestrator | Tuesday 03 March 2026 00:36:54 +0000 (0:00:00.584) 0:03:42.806 ********* 2026-03-03 00:37:05.982733 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:37:05.982746 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:37:05.982759 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:37:05.982772 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:37:05.982785 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:37:05.982797 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:37:05.982809 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:37:05.982821 | orchestrator | 2026-03-03 00:37:05.982834 | orchestrator | TASK 
[osism.commons.services : Populate service facts] ************************* 2026-03-03 00:37:05.982846 | orchestrator | Tuesday 03 March 2026 00:36:54 +0000 (0:00:00.294) 0:03:43.101 ********* 2026-03-03 00:37:05.982860 | orchestrator | ok: [testbed-manager] 2026-03-03 00:37:05.982873 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:37:05.982886 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:37:05.982898 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:37:05.982910 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:37:05.982922 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:37:05.982935 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:37:05.982947 | orchestrator | 2026-03-03 00:37:05.982959 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-03 00:37:05.982972 | orchestrator | Tuesday 03 March 2026 00:37:00 +0000 (0:00:05.921) 0:03:49.022 ********* 2026-03-03 00:37:05.982984 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-03 00:37:05.982998 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-03 00:37:05.983011 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:37:05.983022 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-03 00:37:05.983033 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:37:05.983044 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-03 00:37:05.983054 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:37:05.983065 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-03 00:37:05.983075 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:37:05.983086 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-03 00:37:05.983097 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:37:05.983108 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:37:05.983118 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-03 00:37:05.983129 
| orchestrator | skipping: [testbed-node-2] 2026-03-03 00:37:05.983140 | orchestrator | 2026-03-03 00:37:05.983150 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-03 00:37:05.983161 | orchestrator | Tuesday 03 March 2026 00:37:00 +0000 (0:00:00.292) 0:03:49.314 ********* 2026-03-03 00:37:05.983172 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-03 00:37:05.983183 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-03 00:37:05.983193 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-03 00:37:05.983222 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-03 00:37:05.983234 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-03 00:37:05.983245 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-03 00:37:05.983263 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-03 00:37:05.983274 | orchestrator | 2026-03-03 00:37:05.983285 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-03 00:37:05.983296 | orchestrator | Tuesday 03 March 2026 00:37:01 +0000 (0:00:01.052) 0:03:50.367 ********* 2026-03-03 00:37:05.983308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:37:05.983321 | orchestrator | 2026-03-03 00:37:05.983351 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-03 00:37:05.983363 | orchestrator | Tuesday 03 March 2026 00:37:02 +0000 (0:00:00.407) 0:03:50.774 ********* 2026-03-03 00:37:05.983373 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:37:05.983384 | orchestrator | ok: [testbed-manager] 2026-03-03 00:37:05.983395 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:37:05.983406 | orchestrator | ok: 
[testbed-node-0] 2026-03-03 00:37:05.983417 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:37:05.983428 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:37:05.983438 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:37:05.983449 | orchestrator | 2026-03-03 00:37:05.983460 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-03 00:37:05.983471 | orchestrator | Tuesday 03 March 2026 00:37:03 +0000 (0:00:01.432) 0:03:52.207 ********* 2026-03-03 00:37:05.983482 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:37:05.983492 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:37:05.983503 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:37:05.983514 | orchestrator | ok: [testbed-manager] 2026-03-03 00:37:05.983524 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:37:05.983535 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:37:05.983545 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:37:05.983556 | orchestrator | 2026-03-03 00:37:05.983567 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-03 00:37:05.983578 | orchestrator | Tuesday 03 March 2026 00:37:04 +0000 (0:00:00.625) 0:03:52.833 ********* 2026-03-03 00:37:05.983589 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:37:05.983617 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:37:05.983629 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:37:05.983640 | orchestrator | changed: [testbed-manager] 2026-03-03 00:37:05.983651 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:37:05.983662 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:37:05.983672 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:37:05.983683 | orchestrator | 2026-03-03 00:37:05.983694 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-03 00:37:05.983705 | orchestrator | Tuesday 03 March 2026 00:37:04 +0000 (0:00:00.621) 
0:03:53.454 ********* 2026-03-03 00:37:05.983716 | orchestrator | ok: [testbed-manager] 2026-03-03 00:37:05.983727 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:37:05.983737 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:37:05.983748 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:37:05.983759 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:37:05.983769 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:37:05.983780 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:37:05.983791 | orchestrator | 2026-03-03 00:37:05.983802 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-03 00:37:05.983813 | orchestrator | Tuesday 03 March 2026 00:37:05 +0000 (0:00:00.592) 0:03:54.047 ********* 2026-03-03 00:37:05.983828 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772496311.5703356, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:05.983853 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772496317.1076136, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:05.983866 | orchestrator | 
changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772496325.0143735, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:05.983901 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772496327.2169445, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.126831 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772496331.175253, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.126951 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772496326.1058362, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.126975 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772496330.740867, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.126993 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.127039 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.127076 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.127093 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.127137 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.127155 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.127172 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 00:37:11.127190 | orchestrator | 2026-03-03 00:37:11.127210 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-03 00:37:11.127228 | orchestrator | Tuesday 03 March 2026 00:37:06 +0000 (0:00:00.993) 0:03:55.041 ********* 2026-03-03 00:37:11.127245 | orchestrator | changed: [testbed-manager] 2026-03-03 00:37:11.127264 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:37:11.127294 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:37:11.127382 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:37:11.127395 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:37:11.127406 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:37:11.127417 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:37:11.127428 | orchestrator | 2026-03-03 00:37:11.127440 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-03 00:37:11.127451 | orchestrator | Tuesday 03 March 2026 00:37:07 +0000 (0:00:01.120) 0:03:56.161 ********* 2026-03-03 00:37:11.127462 | orchestrator | changed: [testbed-manager] 2026-03-03 00:37:11.127474 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:37:11.127485 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:37:11.127496 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:37:11.127507 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:37:11.127518 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:37:11.127529 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:37:11.127541 | orchestrator | 2026-03-03 00:37:11.127552 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-03 00:37:11.127563 | orchestrator | Tuesday 03 March 2026 00:37:08 +0000 (0:00:01.194) 0:03:57.356 ********* 2026-03-03 00:37:11.127574 | orchestrator | changed: [testbed-manager] 2026-03-03 00:37:11.127584 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:37:11.127595 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:37:11.127606 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:37:11.127617 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:37:11.127627 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:37:11.127639 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:37:11.127650 | orchestrator | 2026-03-03 00:37:11.127661 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-03 00:37:11.127681 | orchestrator | Tuesday 03 March 2026 00:37:09 +0000 (0:00:01.155) 0:03:58.511 ********* 2026-03-03 00:37:11.127693 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:37:11.127705 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:37:11.127715 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:37:11.127726 | orchestrator | skipping: [testbed-manager] 
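The "Remove pam_motd.so rule" task above edits `/etc/pam.d/sshd` and `/etc/pam.d/login` on every host so PAM no longer prints the dynamic motd. A rough sketch of the filtering involved (assumed logic for illustration, not the osism.commons.motd role's actual implementation):

```python
# Sketch: drop active pam_motd.so rules from a PAM config, keeping comments.
# Assumed logic only; the osism.commons.motd role may differ in detail.
def strip_pam_motd(pam_config: str) -> str:
    kept = [
        line
        for line in pam_config.splitlines()
        if "pam_motd.so" not in line or line.lstrip().startswith("#")
    ]
    return "\n".join(kept)

# Hypothetical excerpt resembling a Debian-family /etc/pam.d/sshd.
sample = (
    "session    optional     pam_motd.so motd=/run/motd.dynamic\n"
    "# session  optional     pam_motd.so noupdate\n"
    "session    required     pam_limits.so"
)
print(strip_pam_motd(sample))
```

With the PAM rule gone, the static `/etc/motd` copied in the next task is what sshd shows (paired with `PrintMotd no`, as the "Configure SSH to not print the motd" task below reflects).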
2026-03-03 00:37:11.127737 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:37:11.127748 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:37:11.127759 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:37:11.127770 | orchestrator | 2026-03-03 00:37:11.127782 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-03 00:37:11.127793 | orchestrator | Tuesday 03 March 2026 00:37:10 +0000 (0:00:00.224) 0:03:58.736 ********* 2026-03-03 00:37:11.127803 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:37:11.127814 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:37:11.127823 | orchestrator | ok: [testbed-manager] 2026-03-03 00:37:11.127833 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:37:11.127842 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:37:11.127852 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:37:11.127861 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:37:11.127870 | orchestrator | 2026-03-03 00:37:11.127880 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-03 00:37:11.127890 | orchestrator | Tuesday 03 March 2026 00:37:10 +0000 (0:00:00.655) 0:03:59.392 ********* 2026-03-03 00:37:11.127902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:37:11.127913 | orchestrator | 2026-03-03 00:37:11.127923 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-03 00:37:11.127943 | orchestrator | Tuesday 03 March 2026 00:37:11 +0000 (0:00:00.321) 0:03:59.713 ********* 2026-03-03 00:38:31.196543 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:31.196655 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:38:31.196672 | orchestrator | changed: 
[testbed-node-4] 2026-03-03 00:38:31.196711 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:38:31.196722 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:38:31.196733 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:38:31.196744 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:38:31.196757 | orchestrator | 2026-03-03 00:38:31.196770 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-03 00:38:31.196782 | orchestrator | Tuesday 03 March 2026 00:37:20 +0000 (0:00:09.102) 0:04:08.816 ********* 2026-03-03 00:38:31.196793 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:31.196804 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:31.196815 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:31.196826 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:31.196837 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:31.196848 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:31.196858 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:31.196869 | orchestrator | 2026-03-03 00:38:31.196881 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-03 00:38:31.196892 | orchestrator | Tuesday 03 March 2026 00:37:21 +0000 (0:00:01.426) 0:04:10.243 ********* 2026-03-03 00:38:31.196903 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:31.196914 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:31.196925 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:31.196935 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:31.196946 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:31.196957 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:31.196968 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:31.196979 | orchestrator | 2026-03-03 00:38:31.196989 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-03 00:38:31.197001 | orchestrator | 
Tuesday 03 March 2026 00:37:22 +0000 (0:00:01.021) 0:04:11.264 ********* 2026-03-03 00:38:31.197012 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:31.197022 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:31.197033 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:31.197044 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:31.197054 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:31.197065 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:31.197078 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:31.197091 | orchestrator | 2026-03-03 00:38:31.197104 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-03 00:38:31.197117 | orchestrator | Tuesday 03 March 2026 00:37:22 +0000 (0:00:00.315) 0:04:11.580 ********* 2026-03-03 00:38:31.197130 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:31.197142 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:31.197155 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:31.197167 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:31.197216 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:31.197229 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:31.197242 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:31.197255 | orchestrator | 2026-03-03 00:38:31.197267 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-03 00:38:31.197279 | orchestrator | Tuesday 03 March 2026 00:37:23 +0000 (0:00:00.300) 0:04:11.880 ********* 2026-03-03 00:38:31.197291 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:31.197304 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:31.197316 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:31.197326 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:31.197337 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:31.197347 | orchestrator | ok: [testbed-node-1] 2026-03-03 
00:38:31.197357 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:31.197368 | orchestrator | 2026-03-03 00:38:31.197379 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-03 00:38:31.197390 | orchestrator | Tuesday 03 March 2026 00:37:23 +0000 (0:00:00.275) 0:04:12.155 ********* 2026-03-03 00:38:31.197400 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:31.197411 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:31.197421 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:31.197440 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:31.197451 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:31.197461 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:31.197472 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:31.197482 | orchestrator | 2026-03-03 00:38:31.197493 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-03 00:38:31.197504 | orchestrator | Tuesday 03 March 2026 00:37:29 +0000 (0:00:05.716) 0:04:17.872 ********* 2026-03-03 00:38:31.197517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:38:31.197530 | orchestrator | 2026-03-03 00:38:31.197542 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-03 00:38:31.197553 | orchestrator | Tuesday 03 March 2026 00:37:29 +0000 (0:00:00.346) 0:04:18.218 ********* 2026-03-03 00:38:31.197563 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-03 00:38:31.197574 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-03 00:38:31.197585 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-03 00:38:31.197596 | orchestrator | skipping: 
[testbed-node-4] => (item=apt-daily)  2026-03-03 00:38:31.197607 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:38:31.197618 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:38:31.197628 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-03 00:38:31.197639 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-03 00:38:31.197650 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-03 00:38:31.197661 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:38:31.197671 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-03 00:38:31.197682 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-03 00:38:31.197693 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-03 00:38:31.197704 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:38:31.197714 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-03 00:38:31.197725 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-03 00:38:31.197752 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:38:31.197764 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:38:31.197775 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-03 00:38:31.197786 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-03 00:38:31.197797 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:38:31.197808 | orchestrator | 2026-03-03 00:38:31.197819 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-03 00:38:31.197830 | orchestrator | Tuesday 03 March 2026 00:37:29 +0000 (0:00:00.285) 0:04:18.504 ********* 2026-03-03 00:38:31.197841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:38:31.197852 | orchestrator | 2026-03-03 00:38:31.197863 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-03 00:38:31.197874 | orchestrator | Tuesday 03 March 2026 00:37:30 +0000 (0:00:00.307) 0:04:18.811 ********* 2026-03-03 00:38:31.197884 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-03 00:38:31.197895 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:38:31.197906 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-03 00:38:31.197917 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-03 00:38:31.197927 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:38:31.197938 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:38:31.197956 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-03 00:38:31.197967 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-03 00:38:31.197978 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:38:31.198007 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-03 00:38:31.198076 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:38:31.198089 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:38:31.198100 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-03 00:38:31.198111 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:38:31.198122 | orchestrator | 2026-03-03 00:38:31.198133 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-03 00:38:31.198144 | orchestrator | Tuesday 03 March 2026 00:37:30 +0000 (0:00:00.300) 0:04:19.111 ********* 2026-03-03 00:38:31.198155 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:38:31.198166 | orchestrator | 2026-03-03 00:38:31.198206 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-03 00:38:31.198218 | orchestrator | Tuesday 03 March 2026 00:37:30 +0000 (0:00:00.349) 0:04:19.461 ********* 2026-03-03 00:38:31.198229 | orchestrator | changed: [testbed-manager] 2026-03-03 00:38:31.198240 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:38:31.198251 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:38:31.198262 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:38:31.198272 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:38:31.198283 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:38:31.198294 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:38:31.198305 | orchestrator | 2026-03-03 00:38:31.198316 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-03 00:38:31.198327 | orchestrator | Tuesday 03 March 2026 00:38:06 +0000 (0:00:36.027) 0:04:55.488 ********* 2026-03-03 00:38:31.198337 | orchestrator | changed: [testbed-manager] 2026-03-03 00:38:31.198348 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:38:31.198359 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:38:31.198370 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:38:31.198380 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:38:31.198391 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:38:31.198408 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:38:31.198419 | orchestrator | 2026-03-03 00:38:31.198429 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-03 00:38:31.198440 | orchestrator | 
Tuesday 03 March 2026 00:38:15 +0000 (0:00:08.501) 0:05:03.990 ********* 2026-03-03 00:38:31.198451 | orchestrator | changed: [testbed-manager] 2026-03-03 00:38:31.198462 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:38:31.198473 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:38:31.198483 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:38:31.198494 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:38:31.198505 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:38:31.198515 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:38:31.198526 | orchestrator | 2026-03-03 00:38:31.198537 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-03 00:38:31.198548 | orchestrator | Tuesday 03 March 2026 00:38:23 +0000 (0:00:07.716) 0:05:11.706 ********* 2026-03-03 00:38:31.198559 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:31.198569 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:31.198580 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:31.198591 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:31.198601 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:31.198612 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:31.198623 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:31.198633 | orchestrator | 2026-03-03 00:38:31.198644 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-03 00:38:31.198663 | orchestrator | Tuesday 03 March 2026 00:38:24 +0000 (0:00:01.726) 0:05:13.433 ********* 2026-03-03 00:38:31.198674 | orchestrator | changed: [testbed-manager] 2026-03-03 00:38:31.198685 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:38:31.198695 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:38:31.198706 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:38:31.198717 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:38:31.198728 | orchestrator | changed: 
[testbed-node-0] 2026-03-03 00:38:31.198739 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:38:31.198750 | orchestrator | 2026-03-03 00:38:31.198769 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-03 00:38:41.963607 | orchestrator | Tuesday 03 March 2026 00:38:31 +0000 (0:00:06.348) 0:05:19.781 ********* 2026-03-03 00:38:41.963719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:38:41.963738 | orchestrator | 2026-03-03 00:38:41.963751 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-03 00:38:41.963763 | orchestrator | Tuesday 03 March 2026 00:38:31 +0000 (0:00:00.340) 0:05:20.122 ********* 2026-03-03 00:38:41.963774 | orchestrator | changed: [testbed-manager] 2026-03-03 00:38:41.963787 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:38:41.963798 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:38:41.963809 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:38:41.963820 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:38:41.963831 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:38:41.963842 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:38:41.963852 | orchestrator | 2026-03-03 00:38:41.963864 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-03 00:38:41.963875 | orchestrator | Tuesday 03 March 2026 00:38:32 +0000 (0:00:00.652) 0:05:20.775 ********* 2026-03-03 00:38:41.963886 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:41.963898 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:41.963909 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:41.963920 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:41.963931 | 
orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:41.963942 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:41.963952 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:41.963963 | orchestrator | 2026-03-03 00:38:41.963974 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-03 00:38:41.963985 | orchestrator | Tuesday 03 March 2026 00:38:34 +0000 (0:00:01.852) 0:05:22.628 ********* 2026-03-03 00:38:41.963996 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:38:41.964007 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:38:41.964018 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:38:41.964028 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:38:41.964039 | orchestrator | changed: [testbed-manager] 2026-03-03 00:38:41.964050 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:38:41.964061 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:38:41.964072 | orchestrator | 2026-03-03 00:38:41.964083 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-03 00:38:41.964094 | orchestrator | Tuesday 03 March 2026 00:38:34 +0000 (0:00:00.802) 0:05:23.430 ********* 2026-03-03 00:38:41.964105 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:38:41.964116 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:38:41.964129 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:38:41.964141 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:38:41.964153 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:38:41.964224 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:38:41.964237 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:38:41.964249 | orchestrator | 2026-03-03 00:38:41.964262 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-03 00:38:41.964295 | orchestrator | Tuesday 03 March 2026 00:38:35 +0000 (0:00:00.283) 
0:05:23.713 ********* 2026-03-03 00:38:41.964308 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:38:41.964321 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:38:41.964333 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:38:41.964345 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:38:41.964358 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:38:41.964369 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:38:41.964382 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:38:41.964394 | orchestrator | 2026-03-03 00:38:41.964407 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-03 00:38:41.964420 | orchestrator | Tuesday 03 March 2026 00:38:35 +0000 (0:00:00.352) 0:05:24.065 ********* 2026-03-03 00:38:41.964433 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:41.964447 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:41.964459 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:41.964470 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:41.964484 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:41.964513 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:41.964524 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:41.964535 | orchestrator | 2026-03-03 00:38:41.964546 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-03 00:38:41.964557 | orchestrator | Tuesday 03 March 2026 00:38:35 +0000 (0:00:00.291) 0:05:24.357 ********* 2026-03-03 00:38:41.964568 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:38:41.964579 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:38:41.964589 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:38:41.964600 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:38:41.964611 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:38:41.964622 | orchestrator | skipping: [testbed-node-1] 2026-03-03 
00:38:41.964632 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:38:41.964643 | orchestrator | 2026-03-03 00:38:41.964654 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-03 00:38:41.964666 | orchestrator | Tuesday 03 March 2026 00:38:36 +0000 (0:00:00.250) 0:05:24.607 ********* 2026-03-03 00:38:41.964676 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:41.964687 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:41.964698 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:41.964709 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:41.964719 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:41.964730 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:41.964741 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:41.964751 | orchestrator | 2026-03-03 00:38:41.964762 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-03 00:38:41.964773 | orchestrator | Tuesday 03 March 2026 00:38:36 +0000 (0:00:00.295) 0:05:24.902 ********* 2026-03-03 00:38:41.964784 | orchestrator | ok: [testbed-node-3] =>  2026-03-03 00:38:41.964795 | orchestrator |  docker_version: 5:27.5.1 2026-03-03 00:38:41.964806 | orchestrator | ok: [testbed-node-4] =>  2026-03-03 00:38:41.964816 | orchestrator |  docker_version: 5:27.5.1 2026-03-03 00:38:41.964827 | orchestrator | ok: [testbed-node-5] =>  2026-03-03 00:38:41.964838 | orchestrator |  docker_version: 5:27.5.1 2026-03-03 00:38:41.964849 | orchestrator | ok: [testbed-manager] =>  2026-03-03 00:38:41.964859 | orchestrator |  docker_version: 5:27.5.1 2026-03-03 00:38:41.964888 | orchestrator | ok: [testbed-node-0] =>  2026-03-03 00:38:41.964900 | orchestrator |  docker_version: 5:27.5.1 2026-03-03 00:38:41.964911 | orchestrator | ok: [testbed-node-1] =>  2026-03-03 00:38:41.964922 | orchestrator |  docker_version: 5:27.5.1 2026-03-03 00:38:41.964932 | orchestrator | ok: [testbed-node-2] =>  
2026-03-03 00:38:41.964943 | orchestrator |  docker_version: 5:27.5.1 2026-03-03 00:38:41.964954 | orchestrator | 2026-03-03 00:38:41.964964 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-03 00:38:41.964975 | orchestrator | Tuesday 03 March 2026 00:38:36 +0000 (0:00:00.251) 0:05:25.154 ********* 2026-03-03 00:38:41.964993 | orchestrator | ok: [testbed-node-3] =>  2026-03-03 00:38:41.965004 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-03 00:38:41.965015 | orchestrator | ok: [testbed-node-4] =>  2026-03-03 00:38:41.965025 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-03 00:38:41.965036 | orchestrator | ok: [testbed-node-5] =>  2026-03-03 00:38:41.965047 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-03 00:38:41.965057 | orchestrator | ok: [testbed-manager] =>  2026-03-03 00:38:41.965068 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-03 00:38:41.965079 | orchestrator | ok: [testbed-node-0] =>  2026-03-03 00:38:41.965089 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-03 00:38:41.965100 | orchestrator | ok: [testbed-node-1] =>  2026-03-03 00:38:41.965113 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-03 00:38:41.965131 | orchestrator | ok: [testbed-node-2] =>  2026-03-03 00:38:41.965194 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-03 00:38:41.965214 | orchestrator | 2026-03-03 00:38:41.965231 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-03 00:38:41.965247 | orchestrator | Tuesday 03 March 2026 00:38:36 +0000 (0:00:00.267) 0:05:25.421 ********* 2026-03-03 00:38:41.965266 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:38:41.965284 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:38:41.965301 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:38:41.965317 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:38:41.965334 | orchestrator | skipping: [testbed-node-0] 
2026-03-03 00:38:41.965352 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:38:41.965370 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:38:41.965388 | orchestrator | 2026-03-03 00:38:41.965405 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-03 00:38:41.965422 | orchestrator | Tuesday 03 March 2026 00:38:37 +0000 (0:00:00.259) 0:05:25.680 ********* 2026-03-03 00:38:41.965440 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:38:41.965459 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:38:41.965478 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:38:41.965496 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:38:41.965515 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:38:41.965534 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:38:41.965552 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:38:41.965565 | orchestrator | 2026-03-03 00:38:41.965576 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-03 00:38:41.965586 | orchestrator | Tuesday 03 March 2026 00:38:37 +0000 (0:00:00.347) 0:05:26.028 ********* 2026-03-03 00:38:41.965599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:38:41.965613 | orchestrator | 2026-03-03 00:38:41.965623 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-03 00:38:41.965634 | orchestrator | Tuesday 03 March 2026 00:38:37 +0000 (0:00:00.422) 0:05:26.451 ********* 2026-03-03 00:38:41.965645 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:41.965655 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:41.965666 | orchestrator | ok: [testbed-node-4] 2026-03-03 
00:38:41.965677 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:41.965687 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:41.965698 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:41.965708 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:41.965719 | orchestrator | 2026-03-03 00:38:41.965730 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-03 00:38:41.965740 | orchestrator | Tuesday 03 March 2026 00:38:38 +0000 (0:00:00.866) 0:05:27.317 ********* 2026-03-03 00:38:41.965760 | orchestrator | ok: [testbed-manager] 2026-03-03 00:38:41.965771 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:38:41.965781 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:38:41.965792 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:38:41.965812 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:38:41.965823 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:38:41.965833 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:38:41.965844 | orchestrator | 2026-03-03 00:38:41.965855 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-03 00:38:41.965866 | orchestrator | Tuesday 03 March 2026 00:38:41 +0000 (0:00:02.863) 0:05:30.181 ********* 2026-03-03 00:38:41.965877 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-03 00:38:41.965891 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-03 00:38:41.965910 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-03 00:38:41.965927 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-03 00:38:41.965946 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-03 00:38:41.965965 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-03 00:38:41.965982 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:38:41.966002 | orchestrator | skipping: 
[testbed-node-5] => (item=containerd)  2026-03-03 00:38:41.966106 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-03 00:38:41.966135 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-03 00:38:41.966278 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:38:41.966336 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-03 00:38:41.966348 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-03 00:38:41.966359 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-03 00:38:41.966369 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:38:41.966380 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-03 00:38:41.966407 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:39:45.965863 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-03 00:39:45.965977 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-03 00:39:45.965994 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-03 00:39:45.966006 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-03 00:39:45.966131 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-03 00:39:45.966146 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:39:45.966159 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:39:45.966170 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-03 00:39:45.966181 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-03 00:39:45.966192 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-03 00:39:45.966204 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:39:45.966216 | orchestrator | 2026-03-03 00:39:45.966228 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-03 00:39:45.966241 | orchestrator | 
Tuesday 03 March 2026 00:38:42 +0000 (0:00:00.659) 0:05:30.840 ********* 2026-03-03 00:39:45.966253 | orchestrator | ok: [testbed-manager] 2026-03-03 00:39:45.966264 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.966275 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.966286 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.966297 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.966308 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.966319 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.966330 | orchestrator | 2026-03-03 00:39:45.966341 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-03 00:39:45.966352 | orchestrator | Tuesday 03 March 2026 00:38:49 +0000 (0:00:07.076) 0:05:37.917 ********* 2026-03-03 00:39:45.966363 | orchestrator | ok: [testbed-manager] 2026-03-03 00:39:45.966375 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.966386 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.966399 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.966413 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.966453 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.966467 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.966480 | orchestrator | 2026-03-03 00:39:45.966492 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-03 00:39:45.966503 | orchestrator | Tuesday 03 March 2026 00:38:50 +0000 (0:00:01.000) 0:05:38.917 ********* 2026-03-03 00:39:45.966514 | orchestrator | ok: [testbed-manager] 2026-03-03 00:39:45.966524 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.966535 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.966546 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.966557 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.966567 | 
orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.966578 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.966589 | orchestrator | 2026-03-03 00:39:45.966600 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-03 00:39:45.966611 | orchestrator | Tuesday 03 March 2026 00:38:59 +0000 (0:00:08.757) 0:05:47.674 ********* 2026-03-03 00:39:45.966623 | orchestrator | changed: [testbed-manager] 2026-03-03 00:39:45.966633 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.966644 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.966655 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.966666 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.966676 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.966687 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.966698 | orchestrator | 2026-03-03 00:39:45.966709 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-03 00:39:45.966720 | orchestrator | Tuesday 03 March 2026 00:39:02 +0000 (0:00:03.425) 0:05:51.100 ********* 2026-03-03 00:39:45.966731 | orchestrator | ok: [testbed-manager] 2026-03-03 00:39:45.966742 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.966753 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.966763 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.966774 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.966785 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.966796 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.966806 | orchestrator | 2026-03-03 00:39:45.966832 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-03 00:39:45.966844 | orchestrator | Tuesday 03 March 2026 00:39:03 +0000 (0:00:01.391) 0:05:52.492 ********* 2026-03-03 00:39:45.966855 | orchestrator | changed: 
[testbed-node-3] 2026-03-03 00:39:45.966865 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.966876 | orchestrator | ok: [testbed-manager] 2026-03-03 00:39:45.966887 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.966898 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.966909 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.966919 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.966930 | orchestrator | 2026-03-03 00:39:45.966941 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-03 00:39:45.966952 | orchestrator | Tuesday 03 March 2026 00:39:05 +0000 (0:00:01.318) 0:05:53.811 ********* 2026-03-03 00:39:45.966963 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:39:45.966975 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:39:45.966986 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:39:45.966997 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:39:45.967007 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:39:45.967018 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:39:45.967029 | orchestrator | changed: [testbed-manager] 2026-03-03 00:39:45.967058 | orchestrator | 2026-03-03 00:39:45.967071 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-03 00:39:45.967082 | orchestrator | Tuesday 03 March 2026 00:39:05 +0000 (0:00:00.749) 0:05:54.560 ********* 2026-03-03 00:39:45.967093 | orchestrator | ok: [testbed-manager] 2026-03-03 00:39:45.967104 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.967115 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.967135 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.967146 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.967156 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.967167 | orchestrator | changed: [testbed-node-2] 2026-03-03 
00:39:45.967178 | orchestrator | 2026-03-03 00:39:45.967189 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-03 00:39:45.967218 | orchestrator | Tuesday 03 March 2026 00:39:15 +0000 (0:00:09.975) 0:06:04.536 ********* 2026-03-03 00:39:45.967230 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.967241 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.967252 | orchestrator | changed: [testbed-manager] 2026-03-03 00:39:45.967262 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.967273 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.967284 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.967294 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.967305 | orchestrator | 2026-03-03 00:39:45.967316 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-03 00:39:45.967327 | orchestrator | Tuesday 03 March 2026 00:39:16 +0000 (0:00:00.834) 0:06:05.371 ********* 2026-03-03 00:39:45.967338 | orchestrator | ok: [testbed-manager] 2026-03-03 00:39:45.967348 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.967359 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.967369 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.967380 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.967391 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.967401 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.967412 | orchestrator | 2026-03-03 00:39:45.967423 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-03 00:39:45.967434 | orchestrator | Tuesday 03 March 2026 00:39:28 +0000 (0:00:11.240) 0:06:16.611 ********* 2026-03-03 00:39:45.967444 | orchestrator | ok: [testbed-manager] 2026-03-03 00:39:45.967455 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.967466 | 
orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.967476 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.967487 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.967498 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.967508 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.967519 | orchestrator | 2026-03-03 00:39:45.967530 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-03 00:39:45.967541 | orchestrator | Tuesday 03 March 2026 00:39:39 +0000 (0:00:11.523) 0:06:28.135 ********* 2026-03-03 00:39:45.967552 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-03 00:39:45.967562 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-03 00:39:45.967573 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-03 00:39:45.967584 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-03 00:39:45.967594 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-03 00:39:45.967605 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-03 00:39:45.967616 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-03 00:39:45.967626 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-03 00:39:45.967637 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-03 00:39:45.967647 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-03 00:39:45.967658 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-03 00:39:45.967669 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-03 00:39:45.967679 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-03 00:39:45.967690 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-03 00:39:45.967701 | orchestrator | 2026-03-03 00:39:45.967711 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-03 00:39:45.967722 | orchestrator | Tuesday 03 March 2026 00:39:40 +0000 (0:00:01.091) 0:06:29.227 ********* 2026-03-03 00:39:45.967741 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:39:45.967751 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:39:45.967762 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:39:45.967773 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:39:45.967783 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:39:45.967794 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:39:45.967805 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:39:45.967815 | orchestrator | 2026-03-03 00:39:45.967826 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-03 00:39:45.967837 | orchestrator | Tuesday 03 March 2026 00:39:41 +0000 (0:00:00.425) 0:06:29.652 ********* 2026-03-03 00:39:45.967848 | orchestrator | ok: [testbed-manager] 2026-03-03 00:39:45.967880 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:39:45.967899 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:39:45.967918 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:39:45.967930 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:39:45.967940 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:39:45.967951 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:39:45.967962 | orchestrator | 2026-03-03 00:39:45.967973 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-03 00:39:45.967985 | orchestrator | Tuesday 03 March 2026 00:39:45 +0000 (0:00:04.140) 0:06:33.793 ********* 2026-03-03 00:39:45.967996 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:39:45.968007 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:39:45.968017 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:39:45.968028 | orchestrator | skipping: 
[testbed-manager] 2026-03-03 00:39:45.968038 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:39:45.968069 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:39:45.968079 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:39:45.968090 | orchestrator | 2026-03-03 00:39:45.968102 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-03 00:39:45.968113 | orchestrator | Tuesday 03 March 2026 00:39:45 +0000 (0:00:00.534) 0:06:34.328 ********* 2026-03-03 00:39:45.968124 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-03 00:39:45.968135 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-03 00:39:45.968186 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-03 00:39:45.968199 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-03 00:39:45.968210 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:39:45.968221 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-03 00:39:45.968232 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-03 00:39:45.968243 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:39:45.968262 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-03 00:40:04.484258 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-03 00:40:04.484364 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:40:04.484381 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-03 00:40:04.484394 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-03 00:40:04.484405 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:40:04.484417 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-03 00:40:04.484428 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  
2026-03-03 00:40:04.484439 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:40:04.484450 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:40:04.484461 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-03 00:40:04.484472 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-03 00:40:04.484483 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:40:04.484495 | orchestrator | 2026-03-03 00:40:04.484507 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-03 00:40:04.484545 | orchestrator | Tuesday 03 March 2026 00:39:46 +0000 (0:00:00.463) 0:06:34.791 ********* 2026-03-03 00:40:04.484557 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:40:04.484568 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:40:04.484579 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:40:04.484590 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:40:04.484601 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:40:04.484612 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:40:04.484622 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:40:04.484633 | orchestrator | 2026-03-03 00:40:04.484644 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-03 00:40:04.484656 | orchestrator | Tuesday 03 March 2026 00:39:46 +0000 (0:00:00.450) 0:06:35.242 ********* 2026-03-03 00:40:04.484667 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:40:04.484677 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:40:04.484688 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:40:04.484699 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:40:04.484710 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:40:04.484720 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:40:04.484731 | orchestrator | skipping: 
[testbed-node-2]
2026-03-03 00:40:04.484743 | orchestrator |
2026-03-03 00:40:04.484754 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-03 00:40:04.484765 | orchestrator | Tuesday 03 March 2026 00:39:47 +0000 (0:00:00.471) 0:06:35.713 *********
2026-03-03 00:40:04.484776 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:40:04.484788 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:40:04.484801 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:40:04.484814 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:40:04.484827 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:40:04.484839 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:40:04.484853 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:40:04.484866 | orchestrator |
2026-03-03 00:40:04.484879 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-03 00:40:04.484891 | orchestrator | Tuesday 03 March 2026 00:39:47 +0000 (0:00:00.595) 0:06:36.308 *********
2026-03-03 00:40:04.484904 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:04.484918 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:04.484930 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:04.484943 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:04.484956 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:04.484969 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:04.484982 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:04.484995 | orchestrator |
2026-03-03 00:40:04.485028 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-03 00:40:04.485043 | orchestrator | Tuesday 03 March 2026 00:39:49 +0000 (0:00:01.904) 0:06:38.213 *********
2026-03-03 00:40:04.485056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:40:04.485072 | orchestrator |
2026-03-03 00:40:04.485099 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-03 00:40:04.485113 | orchestrator | Tuesday 03 March 2026 00:39:50 +0000 (0:00:00.716) 0:06:38.930 *********
2026-03-03 00:40:04.485126 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:04.485139 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:04.485152 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:04.485164 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:04.485176 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:04.485187 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:04.485197 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:40:04.485208 | orchestrator |
2026-03-03 00:40:04.485219 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-03 00:40:04.485243 | orchestrator | Tuesday 03 March 2026 00:39:51 +0000 (0:00:00.752) 0:06:39.683 *********
2026-03-03 00:40:04.485262 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:04.485278 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:04.485297 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:04.485315 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:04.485334 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:04.485353 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:04.485367 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:40:04.485378 | orchestrator |
2026-03-03 00:40:04.485389 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-03 00:40:04.485400 | orchestrator | Tuesday 03 March 2026 00:39:51 +0000 (0:00:00.907) 0:06:40.590 *********
2026-03-03 00:40:04.485411 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:04.485422 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:04.485433 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:04.485443 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:04.485454 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:04.485465 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:04.485475 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:40:04.485486 | orchestrator |
2026-03-03 00:40:04.485497 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-03 00:40:04.485528 | orchestrator | Tuesday 03 March 2026 00:39:53 +0000 (0:00:01.297) 0:06:41.888 *********
2026-03-03 00:40:04.485541 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:40:04.485552 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:04.485563 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:04.485574 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:04.485585 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:04.485596 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:04.485607 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:04.485618 | orchestrator |
2026-03-03 00:40:04.485629 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-03 00:40:04.485640 | orchestrator | Tuesday 03 March 2026 00:39:54 +0000 (0:00:01.379) 0:06:43.267 *********
2026-03-03 00:40:04.485651 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:04.485662 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:04.485673 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:04.485684 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:04.485695 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:04.485706 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:04.485717 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:40:04.485727 | orchestrator |
2026-03-03 00:40:04.485739 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-03 00:40:04.485750 | orchestrator | Tuesday 03 March 2026 00:39:55 +0000 (0:00:01.304) 0:06:44.571 *********
2026-03-03 00:40:04.485761 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:04.485772 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:04.485783 | orchestrator | changed: [testbed-manager]
2026-03-03 00:40:04.485794 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:04.485804 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:04.485816 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:04.485826 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:40:04.485837 | orchestrator |
2026-03-03 00:40:04.485849 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-03 00:40:04.485860 | orchestrator | Tuesday 03 March 2026 00:39:57 +0000 (0:00:01.386) 0:06:45.958 *********
2026-03-03 00:40:04.485872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:40:04.485883 | orchestrator |
2026-03-03 00:40:04.485894 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-03 00:40:04.485905 | orchestrator | Tuesday 03 March 2026 00:39:58 +0000 (0:00:01.023) 0:06:46.982 *********
2026-03-03 00:40:04.485930 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:04.485941 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:04.485953 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:04.485963 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:04.485975 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:04.485985 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:04.485997 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:04.486123 | orchestrator |
2026-03-03 00:40:04.486141 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-03 00:40:04.486153 | orchestrator | Tuesday 03 March 2026 00:39:59 +0000 (0:00:01.383) 0:06:48.366 *********
2026-03-03 00:40:04.486164 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:04.486174 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:04.486185 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:04.486196 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:04.486207 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:04.486217 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:04.486228 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:04.486239 | orchestrator |
2026-03-03 00:40:04.486250 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-03 00:40:04.486261 | orchestrator | Tuesday 03 March 2026 00:40:00 +0000 (0:00:01.131) 0:06:49.497 *********
2026-03-03 00:40:04.486272 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:04.486282 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:04.486293 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:04.486304 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:04.486315 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:04.486326 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:04.486336 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:04.486347 | orchestrator |
2026-03-03 00:40:04.486358 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-03 00:40:04.486369 | orchestrator | Tuesday 03 March 2026 00:40:02 +0000 (0:00:01.403) 0:06:50.900 *********
2026-03-03 00:40:04.486380 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:04.486391 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:04.486402 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:04.486412 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:04.486423 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:04.486434 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:04.486445 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:04.486455 | orchestrator |
2026-03-03 00:40:04.486467 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-03 00:40:04.486478 | orchestrator | Tuesday 03 March 2026 00:40:03 +0000 (0:00:01.168) 0:06:52.068 *********
2026-03-03 00:40:04.486489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:40:04.486500 | orchestrator |
2026-03-03 00:40:04.486511 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-03 00:40:04.486522 | orchestrator | Tuesday 03 March 2026 00:40:04 +0000 (0:00:00.037) 0:06:52.940 *********
2026-03-03 00:40:04.486533 | orchestrator |
2026-03-03 00:40:04.486544 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-03 00:40:04.486555 | orchestrator | Tuesday 03 March 2026 00:40:04 +0000 (0:00:00.042) 0:06:52.977 *********
2026-03-03 00:40:04.486566 | orchestrator |
2026-03-03 00:40:04.486576 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-03 00:40:04.486587 | orchestrator | Tuesday 03 March 2026 00:40:04 +0000 (0:00:00.047) 0:06:53.020 *********
2026-03-03 00:40:04.486598 | orchestrator |
2026-03-03 00:40:04.486609 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-03 00:40:04.486629 | orchestrator | Tuesday 03 March 2026 00:40:04 +0000 (0:00:00.047) 0:06:53.068 *********
2026-03-03 00:40:31.660566 | orchestrator |
2026-03-03 00:40:31.660709 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-03 00:40:31.660726 | orchestrator | Tuesday 03 March 2026 00:40:04 +0000 (0:00:00.040) 0:06:53.108 *********
2026-03-03 00:40:31.660738 | orchestrator |
2026-03-03 00:40:31.660749 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-03 00:40:31.660760 | orchestrator | Tuesday 03 March 2026 00:40:04 +0000 (0:00:00.045) 0:06:53.153 *********
2026-03-03 00:40:31.660771 | orchestrator |
2026-03-03 00:40:31.660782 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-03 00:40:31.660793 | orchestrator | Tuesday 03 March 2026 00:40:04 +0000 (0:00:00.038) 0:06:53.192 *********
2026-03-03 00:40:31.660804 | orchestrator |
2026-03-03 00:40:31.660815 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-03 00:40:31.660829 | orchestrator | Tuesday 03 March 2026 00:40:04 +0000 (0:00:00.037) 0:06:53.230 *********
2026-03-03 00:40:31.660848 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:31.660867 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:31.660884 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:31.660902 | orchestrator |
2026-03-03 00:40:31.660919 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-03 00:40:31.660938 | orchestrator | Tuesday 03 March 2026 00:40:05 +0000 (0:00:01.287) 0:06:54.517 *********
2026-03-03 00:40:31.660957 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:31.661046 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:31.661059 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:31.661070 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:31.661081 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:31.661149 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:40:31.661187 | orchestrator | changed: [testbed-manager]
2026-03-03 00:40:31.661205 | orchestrator |
2026-03-03 00:40:31.661223 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-03 00:40:31.661239 | orchestrator | Tuesday 03 March 2026 00:40:08 +0000 (0:00:02.301) 0:06:56.819 *********
2026-03-03 00:40:31.661258 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:31.661276 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:31.661294 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:31.661314 | orchestrator | changed: [testbed-manager]
2026-03-03 00:40:31.661333 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:31.661351 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:40:31.661364 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:31.661377 | orchestrator |
2026-03-03 00:40:31.661391 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-03 00:40:31.661403 | orchestrator | Tuesday 03 March 2026 00:40:09 +0000 (0:00:01.228) 0:06:58.047 *********
2026-03-03 00:40:31.661415 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:40:31.661426 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:31.661436 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:31.661447 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:31.661457 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:31.661468 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:31.661479 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:40:31.661489 | orchestrator |
2026-03-03 00:40:31.661500 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-03 00:40:31.661511 | orchestrator | Tuesday 03 March 2026 00:40:11 +0000 (0:00:02.264) 0:07:00.311 *********
2026-03-03 00:40:31.661521 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:40:31.661532 | orchestrator |
2026-03-03 00:40:31.661542 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-03 00:40:31.661553 | orchestrator | Tuesday 03 March 2026 00:40:11 +0000 (0:00:00.091) 0:07:00.403 *********
2026-03-03 00:40:31.661564 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:31.661574 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:31.661585 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:31.661596 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:31.661619 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:31.661630 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:31.661641 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:40:31.661652 | orchestrator |
2026-03-03 00:40:31.661678 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-03 00:40:31.661690 | orchestrator | Tuesday 03 March 2026 00:40:12 +0000 (0:00:01.011) 0:07:01.414 *********
2026-03-03 00:40:31.661700 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:40:31.661711 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:40:31.661721 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:40:31.661732 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:40:31.661742 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:40:31.661753 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:40:31.661764 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:40:31.661774 | orchestrator |
2026-03-03 00:40:31.661785 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-03 00:40:31.661795 | orchestrator | Tuesday 03 March 2026 00:40:13 +0000 (0:00:00.699) 0:07:02.114 *********
2026-03-03 00:40:31.661807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:40:31.661821 | orchestrator |
2026-03-03 00:40:31.661831 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-03 00:40:31.661842 | orchestrator | Tuesday 03 March 2026 00:40:14 +0000 (0:00:00.846) 0:07:02.960 *********
2026-03-03 00:40:31.661852 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:31.661863 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:31.661874 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:31.661884 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:31.661895 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:31.661905 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:31.661916 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:31.661926 | orchestrator |
2026-03-03 00:40:31.661937 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-03 00:40:31.661948 | orchestrator | Tuesday 03 March 2026 00:40:15 +0000 (0:00:00.863) 0:07:03.824 *********
2026-03-03 00:40:31.661958 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-03 00:40:31.662072 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-03 00:40:31.662089 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-03 00:40:31.662100 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-03 00:40:31.662111 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-03 00:40:31.662155 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-03 00:40:31.662167 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-03 00:40:31.662178 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-03 00:40:31.662189 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-03 00:40:31.662199 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-03 00:40:31.662210 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-03 00:40:31.662220 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-03 00:40:31.662231 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-03 00:40:31.662241 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-03 00:40:31.662252 | orchestrator |
2026-03-03 00:40:31.662263 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-03 00:40:31.662273 | orchestrator | Tuesday 03 March 2026 00:40:17 +0000 (0:00:02.755) 0:07:06.579 *********
2026-03-03 00:40:31.662284 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:40:31.662295 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:40:31.662305 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:40:31.662325 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:40:31.662336 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:40:31.662347 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:40:31.662357 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:40:31.662368 | orchestrator |
2026-03-03 00:40:31.662378 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-03 00:40:31.662389 | orchestrator | Tuesday 03 March 2026 00:40:18 +0000 (0:00:00.541) 0:07:07.121 *********
2026-03-03 00:40:31.662402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:40:31.662415 | orchestrator |
2026-03-03 00:40:31.662426 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-03 00:40:31.662436 | orchestrator | Tuesday 03 March 2026 00:40:19 +0000 (0:00:00.783) 0:07:07.905 *********
2026-03-03 00:40:31.662447 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:31.662458 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:31.662468 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:31.662479 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:31.662490 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:31.662500 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:31.662511 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:31.662521 | orchestrator |
2026-03-03 00:40:31.662532 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-03 00:40:31.662543 | orchestrator | Tuesday 03 March 2026 00:40:20 +0000 (0:00:01.026) 0:07:08.931 *********
2026-03-03 00:40:31.662554 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:31.662564 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:31.662575 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:31.662586 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:31.662596 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:31.662607 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:31.662617 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:31.662627 | orchestrator |
2026-03-03 00:40:31.662638 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-03 00:40:31.662649 | orchestrator | Tuesday 03 March 2026 00:40:21 +0000 (0:00:00.844) 0:07:09.776 *********
2026-03-03 00:40:31.662660 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:40:31.662670 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:40:31.662689 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:40:31.662700 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:40:31.662711 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:40:31.662721 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:40:31.662732 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:40:31.662742 | orchestrator |
2026-03-03 00:40:31.662753 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-03 00:40:31.662764 | orchestrator | Tuesday 03 March 2026 00:40:21 +0000 (0:00:00.466) 0:07:10.243 *********
2026-03-03 00:40:31.662774 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:40:31.662785 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:31.662795 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:40:31.662806 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:40:31.662817 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:40:31.662827 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:40:31.662838 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:40:31.662848 | orchestrator |
2026-03-03 00:40:31.662859 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-03 00:40:31.662870 | orchestrator | Tuesday 03 March 2026 00:40:23 +0000 (0:00:01.634) 0:07:11.878 *********
2026-03-03 00:40:31.662881 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:40:31.662891 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:40:31.662902 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:40:31.662913 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:40:31.662930 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:40:31.662941 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:40:31.662951 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:40:31.662962 | orchestrator |
2026-03-03 00:40:31.663002 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-03 00:40:31.663021 | orchestrator | Tuesday 03 March 2026 00:40:23 +0000 (0:00:00.475) 0:07:12.353 *********
2026-03-03 00:40:31.663041 | orchestrator | ok: [testbed-manager]
2026-03-03 00:40:31.663060 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:40:31.663072 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:40:31.663082 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:40:31.663093 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:40:31.663104 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:40:31.663126 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:41:03.963211 | orchestrator |
2026-03-03 00:41:03.963319 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-03 00:41:03.963334 | orchestrator | Tuesday 03 March 2026 00:40:31 +0000 (0:00:07.948) 0:07:20.301 *********
2026-03-03 00:41:03.963344 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:41:03.963356 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:41:03.963365 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.963376 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:41:03.963386 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:41:03.963395 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:41:03.963405 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:41:03.963414 | orchestrator |
2026-03-03 00:41:03.963424 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-03 00:41:03.963434 | orchestrator | Tuesday 03 March 2026 00:40:33 +0000 (0:00:01.324) 0:07:21.626 *********
2026-03-03 00:41:03.963444 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.963453 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:41:03.963463 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:41:03.963472 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:41:03.963482 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:41:03.963491 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:41:03.963501 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:41:03.963510 | orchestrator |
2026-03-03 00:41:03.963520 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-03 00:41:03.963530 | orchestrator | Tuesday 03 March 2026 00:40:34 +0000 (0:00:01.783) 0:07:23.410 *********
2026-03-03 00:41:03.963539 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:41:03.963548 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:41:03.963558 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.963567 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:41:03.963577 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:41:03.963586 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:41:03.963595 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:41:03.963605 | orchestrator |
2026-03-03 00:41:03.963614 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-03 00:41:03.963624 | orchestrator | Tuesday 03 March 2026 00:40:36 +0000 (0:00:01.091) 0:07:25.082 *********
2026-03-03 00:41:03.963634 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:41:03.963643 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:41:03.963653 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:41:03.963662 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.963672 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:41:03.963681 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:41:03.963690 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:41:03.963700 | orchestrator |
2026-03-03 00:41:03.963709 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-03 00:41:03.963719 | orchestrator | Tuesday 03 March 2026 00:40:37 +0000 (0:00:01.091) 0:07:26.174 *********
2026-03-03 00:41:03.963728 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:41:03.963738 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:41:03.963776 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:41:03.963789 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:41:03.963801 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:41:03.963812 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:41:03.963823 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:41:03.963834 | orchestrator |
2026-03-03 00:41:03.963847 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-03 00:41:03.963858 | orchestrator | Tuesday 03 March 2026 00:40:38 +0000 (0:00:00.780) 0:07:26.954 *********
2026-03-03 00:41:03.963869 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:41:03.963879 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:41:03.963890 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:41:03.963901 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:41:03.963936 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:41:03.963949 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:41:03.963960 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:41:03.963970 | orchestrator |
2026-03-03 00:41:03.963982 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-03 00:41:03.963994 | orchestrator | Tuesday 03 March 2026 00:40:38 +0000 (0:00:00.494) 0:07:27.448 *********
2026-03-03 00:41:03.964005 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:41:03.964018 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:41:03.964028 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:41:03.964040 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.964051 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:41:03.964062 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:41:03.964073 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:41:03.964085 | orchestrator |
2026-03-03 00:41:03.964096 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-03 00:41:03.964106 | orchestrator | Tuesday 03 March 2026 00:40:39 +0000 (0:00:00.478) 0:07:27.927 *********
2026-03-03 00:41:03.964115 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:41:03.964125 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:41:03.964134 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:41:03.964144 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.964153 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:41:03.964162 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:41:03.964172 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:41:03.964181 | orchestrator |
2026-03-03 00:41:03.964190 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-03 00:41:03.964200 | orchestrator | Tuesday 03 March 2026 00:40:39 +0000 (0:00:00.658) 0:07:28.585 *********
2026-03-03 00:41:03.964209 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:41:03.964219 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:41:03.964228 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:41:03.964237 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.964246 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:41:03.964256 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:41:03.964265 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:41:03.964274 | orchestrator |
2026-03-03 00:41:03.964284 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-03 00:41:03.964293 | orchestrator | Tuesday 03 March 2026 00:40:40 +0000 (0:00:00.506) 0:07:29.092 *********
2026-03-03 00:41:03.964303 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.964312 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:41:03.964322 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:41:03.964331 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:41:03.964340 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:41:03.964350 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:41:03.964359 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:41:03.964368 | orchestrator |
2026-03-03 00:41:03.964394 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-03 00:41:03.964405 | orchestrator | Tuesday 03 March 2026 00:40:46 +0000 (0:00:05.537) 0:07:34.630 *********
2026-03-03 00:41:03.964414 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:41:03.964432 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:41:03.964442 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:41:03.964468 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:41:03.964479 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:41:03.964488 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:41:03.964498 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:41:03.964507 | orchestrator |
2026-03-03 00:41:03.964517 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-03 00:41:03.964527 | orchestrator | Tuesday 03 March 2026 00:40:46 +0000 (0:00:00.435) 0:07:35.065 *********
2026-03-03 00:41:03.964538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:41:03.964550 | orchestrator |
2026-03-03 00:41:03.964560 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-03 00:41:03.964570 | orchestrator | Tuesday 03 March 2026 00:40:47 +0000 (0:00:00.909) 0:07:35.974 *********
2026-03-03 00:41:03.964579 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.964589 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:41:03.964598 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:41:03.964608 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:41:03.964617 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:41:03.964626 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:41:03.964636 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:41:03.964645 | orchestrator |
2026-03-03 00:41:03.964655 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-03 00:41:03.964664 | orchestrator | Tuesday 03 March 2026 00:40:49 +0000 (0:00:01.859) 0:07:37.834 *********
2026-03-03 00:41:03.964674 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:41:03.964683 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:41:03.964692 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.964702 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:41:03.964711 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:41:03.964720 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:41:03.964730 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:41:03.964739 | orchestrator |
2026-03-03 00:41:03.964749 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-03 00:41:03.964759 | orchestrator | Tuesday 03 March 2026 00:40:50 +0000 (0:00:01.055) 0:07:38.889 *********
2026-03-03 00:41:03.964768 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:41:03.964778 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:41:03.964787 | orchestrator | ok: [testbed-manager]
2026-03-03 00:41:03.964797 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:41:03.964806 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:41:03.964816 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:41:03.964825 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:41:03.964834 | orchestrator |
2026-03-03 00:41:03.964844 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-03 00:41:03.964854 | orchestrator | Tuesday 03 March 2026 00:40:51 +0000 (0:00:00.799) 0:07:39.689 *********
2026-03-03 00:41:03.964863 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-03 00:41:03.964875 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-03 00:41:03.964885 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-03 00:41:03.964899 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-03 00:41:03.964908 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-03 00:41:03.964940 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-03 00:41:03.964950 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-03 00:41:03.964959 | orchestrator |
2026-03-03 00:41:03.964969 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-03 00:41:03.964979 | orchestrator | Tuesday 03 March 2026 00:40:52 +0000 (0:00:01.782) 0:07:41.472 *********
2026-03-03 00:41:03.964988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:41:03.964998 | orchestrator |
2026-03-03 00:41:03.965008 | orchestrator | TASK [osism.services.lldpd :
Install lldpd package] **************************** 2026-03-03 00:41:03.965017 | orchestrator | Tuesday 03 March 2026 00:40:53 +0000 (0:00:00.681) 0:07:42.153 ********* 2026-03-03 00:41:03.965027 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:03.965037 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:03.965046 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:03.965056 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:03.965065 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:03.965075 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:03.965084 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:03.965094 | orchestrator | 2026-03-03 00:41:03.965109 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-03 00:41:34.073539 | orchestrator | Tuesday 03 March 2026 00:41:03 +0000 (0:00:10.395) 0:07:52.549 ********* 2026-03-03 00:41:34.073611 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:41:34.073619 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:41:34.073623 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:41:34.073627 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:41:34.073631 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:41:34.073635 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:41:34.073639 | orchestrator | ok: [testbed-manager] 2026-03-03 00:41:34.073645 | orchestrator | 2026-03-03 00:41:34.073652 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-03 00:41:34.073659 | orchestrator | Tuesday 03 March 2026 00:41:06 +0000 (0:00:02.474) 0:07:55.024 ********* 2026-03-03 00:41:34.073665 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:41:34.073672 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:41:34.073678 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:41:34.073684 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:41:34.073690 | orchestrator | ok: 
[testbed-node-1] 2026-03-03 00:41:34.073696 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:41:34.073702 | orchestrator | 2026-03-03 00:41:34.073708 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-03 00:41:34.073715 | orchestrator | Tuesday 03 March 2026 00:41:07 +0000 (0:00:01.289) 0:07:56.314 ********* 2026-03-03 00:41:34.073722 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:34.073729 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:34.073735 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:34.073741 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:34.073747 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:34.073753 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:34.073759 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:34.073765 | orchestrator | 2026-03-03 00:41:34.073771 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-03 00:41:34.073778 | orchestrator | 2026-03-03 00:41:34.073784 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-03 00:41:34.073791 | orchestrator | Tuesday 03 March 2026 00:41:09 +0000 (0:00:01.325) 0:07:57.639 ********* 2026-03-03 00:41:34.073797 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:41:34.073823 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:41:34.073827 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:41:34.073831 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:41:34.073835 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:41:34.073839 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:41:34.073843 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:41:34.073846 | orchestrator | 2026-03-03 00:41:34.073851 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-03 
00:41:34.073855 | orchestrator | 2026-03-03 00:41:34.073859 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-03 00:41:34.073863 | orchestrator | Tuesday 03 March 2026 00:41:09 +0000 (0:00:00.424) 0:07:58.064 ********* 2026-03-03 00:41:34.073900 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:34.073906 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:34.073910 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:34.073915 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:34.073919 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:34.073923 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:34.073926 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:34.073930 | orchestrator | 2026-03-03 00:41:34.073934 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-03 00:41:34.073938 | orchestrator | Tuesday 03 March 2026 00:41:10 +0000 (0:00:01.281) 0:07:59.346 ********* 2026-03-03 00:41:34.073942 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:41:34.073946 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:41:34.073950 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:41:34.073954 | orchestrator | ok: [testbed-manager] 2026-03-03 00:41:34.073957 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:41:34.073961 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:41:34.073965 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:41:34.073969 | orchestrator | 2026-03-03 00:41:34.073972 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-03 00:41:34.073976 | orchestrator | Tuesday 03 March 2026 00:41:12 +0000 (0:00:01.338) 0:08:00.684 ********* 2026-03-03 00:41:34.073980 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:41:34.073995 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:41:34.073999 | orchestrator | skipping: 
[testbed-node-5] 2026-03-03 00:41:34.074003 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:41:34.074007 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:41:34.074011 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:41:34.074051 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:41:34.074055 | orchestrator | 2026-03-03 00:41:34.074059 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-03 00:41:34.074063 | orchestrator | Tuesday 03 March 2026 00:41:12 +0000 (0:00:00.590) 0:08:01.275 ********* 2026-03-03 00:41:34.074067 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:41:34.074073 | orchestrator | 2026-03-03 00:41:34.074077 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-03 00:41:34.074082 | orchestrator | Tuesday 03 March 2026 00:41:13 +0000 (0:00:00.719) 0:08:01.995 ********* 2026-03-03 00:41:34.074088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:41:34.074095 | orchestrator | 2026-03-03 00:41:34.074099 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-03 00:41:34.074104 | orchestrator | Tuesday 03 March 2026 00:41:14 +0000 (0:00:00.688) 0:08:02.683 ********* 2026-03-03 00:41:34.074108 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:34.074113 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:34.074117 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:34.074122 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:34.074131 | orchestrator | changed: [testbed-node-0] 2026-03-03 
00:41:34.074136 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:34.074141 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:34.074145 | orchestrator | 2026-03-03 00:41:34.074161 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-03 00:41:34.074165 | orchestrator | Tuesday 03 March 2026 00:41:23 +0000 (0:00:09.421) 0:08:12.104 ********* 2026-03-03 00:41:34.074169 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:34.074173 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:34.074176 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:34.074180 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:34.074184 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:34.074188 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:34.074191 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:34.074195 | orchestrator | 2026-03-03 00:41:34.074199 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-03 00:41:34.074203 | orchestrator | Tuesday 03 March 2026 00:41:24 +0000 (0:00:00.773) 0:08:12.878 ********* 2026-03-03 00:41:34.074207 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:34.074210 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:34.074214 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:34.074218 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:34.074221 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:34.074225 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:34.074229 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:34.074233 | orchestrator | 2026-03-03 00:41:34.074236 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-03 00:41:34.074240 | orchestrator | Tuesday 03 March 2026 00:41:25 +0000 (0:00:01.286) 0:08:14.164 ********* 2026-03-03 00:41:34.074244 | 
orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:34.074248 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:34.074251 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:34.074255 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:34.074259 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:34.074262 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:34.074266 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:34.074270 | orchestrator | 2026-03-03 00:41:34.074274 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-03 00:41:34.074277 | orchestrator | Tuesday 03 March 2026 00:41:27 +0000 (0:00:01.796) 0:08:15.960 ********* 2026-03-03 00:41:34.074281 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:34.074285 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:34.074289 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:34.074292 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:34.074296 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:34.074300 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:34.074303 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:34.074307 | orchestrator | 2026-03-03 00:41:34.074311 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-03 00:41:34.074315 | orchestrator | Tuesday 03 March 2026 00:41:28 +0000 (0:00:01.207) 0:08:17.168 ********* 2026-03-03 00:41:34.074318 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:34.074322 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:34.074326 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:34.074330 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:34.074333 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:34.074337 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:34.074341 | orchestrator | changed: 
[testbed-node-2] 2026-03-03 00:41:34.074345 | orchestrator | 2026-03-03 00:41:34.074349 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-03 00:41:34.074352 | orchestrator | 2026-03-03 00:41:34.074356 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-03 00:41:34.074360 | orchestrator | Tuesday 03 March 2026 00:41:29 +0000 (0:00:01.110) 0:08:18.278 ********* 2026-03-03 00:41:34.074368 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:41:34.074372 | orchestrator | 2026-03-03 00:41:34.074375 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-03 00:41:34.074379 | orchestrator | Tuesday 03 March 2026 00:41:30 +0000 (0:00:00.808) 0:08:19.087 ********* 2026-03-03 00:41:34.074383 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:41:34.074389 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:41:34.074393 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:41:34.074397 | orchestrator | ok: [testbed-manager] 2026-03-03 00:41:34.074401 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:41:34.074404 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:41:34.074408 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:41:34.074412 | orchestrator | 2026-03-03 00:41:34.074416 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-03 00:41:34.074419 | orchestrator | Tuesday 03 March 2026 00:41:31 +0000 (0:00:00.849) 0:08:19.937 ********* 2026-03-03 00:41:34.074423 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:34.074427 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:34.074431 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:34.074435 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:34.074438 | 
orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:34.074442 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:34.074446 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:34.074449 | orchestrator | 2026-03-03 00:41:34.074453 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-03 00:41:34.074457 | orchestrator | Tuesday 03 March 2026 00:41:32 +0000 (0:00:01.066) 0:08:21.004 ********* 2026-03-03 00:41:34.074461 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:41:34.074465 | orchestrator | 2026-03-03 00:41:34.074468 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-03 00:41:34.074472 | orchestrator | Tuesday 03 March 2026 00:41:33 +0000 (0:00:00.827) 0:08:21.831 ********* 2026-03-03 00:41:34.074476 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:41:34.074480 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:41:34.074483 | orchestrator | ok: [testbed-manager] 2026-03-03 00:41:34.074487 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:41:34.074491 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:41:34.074495 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:41:34.074498 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:41:34.074502 | orchestrator | 2026-03-03 00:41:34.074508 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-03 00:41:35.352765 | orchestrator | Tuesday 03 March 2026 00:41:34 +0000 (0:00:00.827) 0:08:22.658 ********* 2026-03-03 00:41:35.352837 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:41:35.352844 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:41:35.352848 | orchestrator | changed: [testbed-manager] 2026-03-03 00:41:35.352852 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:41:35.352856 | 
orchestrator | changed: [testbed-node-0] 2026-03-03 00:41:35.352860 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:41:35.352927 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:41:35.352932 | orchestrator | 2026-03-03 00:41:35.352937 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:41:35.352943 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-03 00:41:35.352949 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-03 00:41:35.352953 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-03 00:41:35.352977 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-03 00:41:35.352981 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-03 00:41:35.352985 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-03 00:41:35.352989 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-03 00:41:35.352993 | orchestrator | 2026-03-03 00:41:35.352997 | orchestrator | 2026-03-03 00:41:35.353001 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:41:35.353005 | orchestrator | Tuesday 03 March 2026 00:41:35 +0000 (0:00:01.013) 0:08:23.672 ********* 2026-03-03 00:41:35.353009 | orchestrator | =============================================================================== 2026-03-03 00:41:35.353012 | orchestrator | osism.commons.packages : Install required packages --------------------- 88.43s 2026-03-03 00:41:35.353016 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 
36.03s 2026-03-03 00:41:35.353020 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.82s 2026-03-03 00:41:35.353024 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.50s 2026-03-03 00:41:35.353028 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.52s 2026-03-03 00:41:35.353031 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 11.24s 2026-03-03 00:41:35.353035 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.43s 2026-03-03 00:41:35.353039 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.40s 2026-03-03 00:41:35.353043 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.35s 2026-03-03 00:41:35.353048 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.98s 2026-03-03 00:41:35.353051 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.42s 2026-03-03 00:41:35.353065 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.10s 2026-03-03 00:41:35.353069 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.76s 2026-03-03 00:41:35.353073 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.50s 2026-03-03 00:41:35.353077 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.95s 2026-03-03 00:41:35.353081 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.72s 2026-03-03 00:41:35.353085 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.08s 2026-03-03 00:41:35.353088 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.35s 
2026-03-03 00:41:35.353092 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.08s 2026-03-03 00:41:35.353096 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.92s 2026-03-03 00:41:35.551066 | orchestrator | + osism apply fail2ban 2026-03-03 00:41:48.090375 | orchestrator | 2026-03-03 00:41:48 | INFO  | Prepare task for execution of fail2ban. 2026-03-03 00:41:48.145844 | orchestrator | 2026-03-03 00:41:48 | INFO  | Task 472709d1-3d58-45e9-9c53-dd54c5478729 (fail2ban) was prepared for execution. 2026-03-03 00:41:48.145970 | orchestrator | 2026-03-03 00:41:48 | INFO  | It takes a moment until task 472709d1-3d58-45e9-9c53-dd54c5478729 (fail2ban) has been started and output is visible here. 2026-03-03 00:42:10.360678 | orchestrator | 2026-03-03 00:42:10.360879 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-03 00:42:10.360930 | orchestrator | 2026-03-03 00:42:10.360943 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-03 00:42:10.360955 | orchestrator | Tuesday 03 March 2026 00:41:52 +0000 (0:00:00.230) 0:00:00.230 ********* 2026-03-03 00:42:10.360967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 00:42:10.360980 | orchestrator | 2026-03-03 00:42:10.360992 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-03 00:42:10.361002 | orchestrator | Tuesday 03 March 2026 00:41:53 +0000 (0:00:00.985) 0:00:01.215 ********* 2026-03-03 00:42:10.361013 | orchestrator | changed: [testbed-manager] 2026-03-03 00:42:10.361025 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:42:10.361036 | orchestrator | changed: 
[testbed-node-0] 2026-03-03 00:42:10.361047 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:42:10.361057 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:42:10.361068 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:42:10.361079 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:42:10.361090 | orchestrator | 2026-03-03 00:42:10.361101 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-03 00:42:10.361111 | orchestrator | Tuesday 03 March 2026 00:42:05 +0000 (0:00:12.316) 0:00:13.532 ********* 2026-03-03 00:42:10.361122 | orchestrator | changed: [testbed-manager] 2026-03-03 00:42:10.361133 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:42:10.361144 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:42:10.361154 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:42:10.361171 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:42:10.361185 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:42:10.361195 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:42:10.361206 | orchestrator | 2026-03-03 00:42:10.361219 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-03 00:42:10.361232 | orchestrator | Tuesday 03 March 2026 00:42:06 +0000 (0:00:01.328) 0:00:14.860 ********* 2026-03-03 00:42:10.361244 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:42:10.361258 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:42:10.361271 | orchestrator | ok: [testbed-manager] 2026-03-03 00:42:10.361283 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:42:10.361296 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:42:10.361310 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:42:10.361328 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:42:10.361340 | orchestrator | 2026-03-03 00:42:10.361353 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-03 
00:42:10.361367 | orchestrator | Tuesday 03 March 2026 00:42:08 +0000 (0:00:01.582) 0:00:16.443 ********* 2026-03-03 00:42:10.361379 | orchestrator | changed: [testbed-manager] 2026-03-03 00:42:10.361394 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:42:10.361413 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:42:10.361426 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:42:10.361438 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:42:10.361451 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:42:10.361463 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:42:10.361476 | orchestrator | 2026-03-03 00:42:10.361488 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:42:10.361501 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:42:10.361515 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:42:10.361529 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:42:10.361542 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:42:10.361579 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:42:10.361592 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:42:10.361607 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:42:10.361623 | orchestrator | 2026-03-03 00:42:10.361635 | orchestrator | 2026-03-03 00:42:10.361645 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:42:10.361656 | orchestrator | Tuesday 03 March 2026 00:42:10 +0000 (0:00:01.641) 
0:00:18.084 ********* 2026-03-03 00:42:10.361667 | orchestrator | =============================================================================== 2026-03-03 00:42:10.361677 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.32s 2026-03-03 00:42:10.361688 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.64s 2026-03-03 00:42:10.361698 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.58s 2026-03-03 00:42:10.361709 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.33s 2026-03-03 00:42:10.361720 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 0.99s 2026-03-03 00:42:10.685244 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-03 00:42:10.685334 | orchestrator | + osism apply network 2026-03-03 00:42:22.771099 | orchestrator | 2026-03-03 00:42:22 | INFO  | Prepare task for execution of network. 2026-03-03 00:42:22.836113 | orchestrator | 2026-03-03 00:42:22 | INFO  | Task 3df69db6-e0e5-4064-826c-c920c2d18344 (network) was prepared for execution. 2026-03-03 00:42:22.836217 | orchestrator | 2026-03-03 00:42:22 | INFO  | It takes a moment until task 3df69db6-e0e5-4064-826c-c920c2d18344 (network) has been started and output is visible here. 
2026-03-03 00:42:53.090291 | orchestrator |
2026-03-03 00:42:53.090392 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-03 00:42:53.090410 | orchestrator |
2026-03-03 00:42:53.090424 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-03 00:42:53.090436 | orchestrator | Tuesday 03 March 2026 00:42:27 +0000 (0:00:00.303) 0:00:00.303 *********
2026-03-03 00:42:53.090448 | orchestrator | ok: [testbed-manager]
2026-03-03 00:42:53.090460 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:42:53.090471 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:42:53.090482 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:42:53.090493 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:42:53.090504 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:42:53.090515 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:42:53.090526 | orchestrator |
2026-03-03 00:42:53.090537 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-03 00:42:53.090548 | orchestrator | Tuesday 03 March 2026 00:42:28 +0000 (0:00:00.663) 0:00:00.966 *********
2026-03-03 00:42:53.090561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 00:42:53.090574 | orchestrator |
2026-03-03 00:42:53.090585 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-03 00:42:53.090597 | orchestrator | Tuesday 03 March 2026 00:42:29 +0000 (0:00:01.262) 0:00:02.229 *********
2026-03-03 00:42:53.090607 | orchestrator | ok: [testbed-manager]
2026-03-03 00:42:53.090619 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:42:53.090629 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:42:53.090640 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:42:53.090651 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:42:53.090684 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:42:53.090695 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:42:53.090706 | orchestrator |
2026-03-03 00:42:53.090718 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-03 00:42:53.090729 | orchestrator | Tuesday 03 March 2026 00:42:32 +0000 (0:00:02.414) 0:00:04.643 *********
2026-03-03 00:42:53.090740 | orchestrator | ok: [testbed-manager]
2026-03-03 00:42:53.090789 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:42:53.090801 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:42:53.090812 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:42:53.090823 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:42:53.090836 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:42:53.090848 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:42:53.090861 | orchestrator |
2026-03-03 00:42:53.090874 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-03 00:42:53.090887 | orchestrator | Tuesday 03 March 2026 00:42:34 +0000 (0:00:01.927) 0:00:06.570 *********
2026-03-03 00:42:53.090901 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-03 00:42:53.090914 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-03 00:42:53.090927 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-03 00:42:53.090940 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-03 00:42:53.090953 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-03 00:42:53.090966 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-03 00:42:53.090978 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-03 00:42:53.090991 | orchestrator |
2026-03-03 00:42:53.091004 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-03 00:42:53.091017 | orchestrator | Tuesday 03 March 2026 00:42:35 +0000 (0:00:01.147) 0:00:07.717 *********
2026-03-03 00:42:53.091030 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-03 00:42:53.091044 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-03 00:42:53.091058 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-03 00:42:53.091071 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-03 00:42:53.091083 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-03 00:42:53.091096 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-03 00:42:53.091110 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-03 00:42:53.091122 | orchestrator |
2026-03-03 00:42:53.091135 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-03 00:42:53.091148 | orchestrator | Tuesday 03 March 2026 00:42:38 +0000 (0:00:03.565) 0:00:11.283 *********
2026-03-03 00:42:53.091161 | orchestrator | changed: [testbed-manager]
2026-03-03 00:42:53.091175 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:42:53.091188 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:42:53.091200 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:42:53.091211 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:42:53.091222 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:42:53.091233 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:42:53.091243 | orchestrator |
2026-03-03 00:42:53.091254 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-03 00:42:53.091265 | orchestrator | Tuesday 03 March 2026 00:42:40 +0000 (0:00:01.634) 0:00:12.917 *********
2026-03-03 00:42:53.091276 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-03 00:42:53.091287 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-03 00:42:53.091298 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-03 00:42:53.091309 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-03 00:42:53.091336 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-03 00:42:53.091347 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-03 00:42:53.091358 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-03 00:42:53.091369 | orchestrator |
2026-03-03 00:42:53.091380 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-03 00:42:53.091391 | orchestrator | Tuesday 03 March 2026 00:42:42 +0000 (0:00:01.702) 0:00:14.619 *********
2026-03-03 00:42:53.091411 | orchestrator | ok: [testbed-manager]
2026-03-03 00:42:53.091422 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:42:53.091433 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:42:53.091444 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:42:53.091454 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:42:53.091465 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:42:53.091476 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:42:53.091487 | orchestrator |
2026-03-03 00:42:53.091498 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-03 00:42:53.091527 | orchestrator | Tuesday 03 March 2026 00:42:43 +0000 (0:00:00.995) 0:00:15.615 *********
2026-03-03 00:42:53.091539 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:42:53.091551 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:42:53.091585 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:42:53.091597 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:42:53.091608 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:42:53.091618 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:42:53.091641 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:42:53.091652 | orchestrator |
2026-03-03 00:42:53.091663 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-03 00:42:53.091674 | orchestrator | Tuesday 03 March 2026 00:42:43 +0000 (0:00:00.624) 0:00:16.239 *********
2026-03-03 00:42:53.091685 | orchestrator | ok: [testbed-manager]
2026-03-03 00:42:53.091696 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:42:53.091707 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:42:53.091718 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:42:53.091729 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:42:53.091740 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:42:53.091785 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:42:53.091797 | orchestrator |
2026-03-03 00:42:53.091808 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-03 00:42:53.091819 | orchestrator | Tuesday 03 March 2026 00:42:45 +0000 (0:00:02.146) 0:00:18.386 *********
2026-03-03 00:42:53.091830 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:42:53.091841 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:42:53.091852 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:42:53.091863 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:42:53.091873 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:42:53.091884 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:42:53.091895 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-03 00:42:53.091908 | orchestrator |
2026-03-03 00:42:53.091919 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-03 00:42:53.091930 | orchestrator | Tuesday 03 March 2026 00:42:46 +0000 (0:00:00.863) 0:00:19.249 *********
2026-03-03 00:42:53.091940 | orchestrator | ok: [testbed-manager]
2026-03-03 00:42:53.091951 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:42:53.091962 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:42:53.091972 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:42:53.091983 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:42:53.091994 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:42:53.092005 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:42:53.092016 | orchestrator |
2026-03-03 00:42:53.092026 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-03 00:42:53.092037 | orchestrator | Tuesday 03 March 2026 00:42:48 +0000 (0:00:01.817) 0:00:21.066 *********
2026-03-03 00:42:53.092049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 00:42:53.092061 | orchestrator |
2026-03-03 00:42:53.092072 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-03 00:42:53.092091 | orchestrator | Tuesday 03 March 2026 00:42:49 +0000 (0:00:01.085) 0:00:22.151 *********
2026-03-03 00:42:53.092102 | orchestrator | ok: [testbed-manager]
2026-03-03 00:42:53.092112 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:42:53.092123 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:42:53.092134 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:42:53.092144 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:42:53.092155 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:42:53.092166 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:42:53.092177 | orchestrator |
2026-03-03 00:42:53.092188 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-03 00:42:53.092199 | orchestrator | Tuesday 03 March 2026 00:42:51 +0000 (0:00:01.733) 0:00:23.885 *********
2026-03-03 00:42:53.092210 | orchestrator | ok: [testbed-manager]
2026-03-03 00:42:53.092226 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:42:53.092237 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:42:53.092248 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:42:53.092259 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:42:53.092269 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:42:53.092280 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:42:53.092290 | orchestrator |
2026-03-03 00:42:53.092301 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-03 00:42:53.092312 | orchestrator | Tuesday 03 March 2026 00:42:52 +0000 (0:00:00.677) 0:00:24.562 *********
2026-03-03 00:42:53.092323 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-03 00:42:53.092334 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-03 00:42:53.092345 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-03 00:42:53.092355 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-03 00:42:53.092366 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-03 00:42:53.092377 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-03 00:42:53.092388 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-03 00:42:53.092398 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-03 00:42:53.092409 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-03 00:42:53.092420 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-03 00:42:53.092430 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-03 00:42:53.092441 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-03 00:42:53.092452 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-03 00:42:53.092463 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-03 00:42:53.092474 | orchestrator |
2026-03-03 00:42:53.092492 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-03 00:43:07.911759 | orchestrator | Tuesday 03 March 2026 00:42:53 +0000 (0:00:01.003) 0:00:25.566 *********
2026-03-03 00:43:07.911849 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:43:07.911859 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:43:07.911866 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:43:07.911873 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:43:07.911879 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:43:07.911886 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:43:07.911892 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:43:07.911899 | orchestrator |
2026-03-03 00:43:07.911906 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-03 00:43:07.911913 | orchestrator | Tuesday 03 March 2026 00:42:53 +0000 (0:00:00.541) 0:00:26.107 *********
2026-03-03 00:43:07.911920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 00:43:07.911947 | orchestrator |
2026-03-03 00:43:07.911954 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-03 00:43:07.911961 | orchestrator | Tuesday 03 March 2026 00:42:57 +0000 (0:00:04.230) 0:00:30.337 *********
2026-03-03 00:43:07.911968 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.911977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.911985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.911991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.911998 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912022 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912097 | orchestrator |
2026-03-03 00:43:07.912104 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-03 00:43:07.912110 | orchestrator | Tuesday 03 March 2026 00:43:03 +0000 (0:00:05.162) 0:00:35.500 *********
2026-03-03 00:43:07.912117 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912136 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-03 00:43:07.912180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:07.912217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:21.741256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-03 00:43:21.741390 | orchestrator |
2026-03-03 00:43:21.741419 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-03 00:43:21.741440 | orchestrator | Tuesday 03 March 2026 00:43:08 +0000 (0:00:05.112) 0:00:40.612 *********
2026-03-03 00:43:21.741459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 00:43:21.741478 | orchestrator |
2026-03-03 00:43:21.741496 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-03 00:43:21.741508 | orchestrator | Tuesday 03 March 2026 00:43:09 +0000 (0:00:01.046) 0:00:41.659 *********
2026-03-03 00:43:21.741519 | orchestrator | ok: [testbed-manager]
2026-03-03 00:43:21.741530 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:43:21.741540 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:43:21.741549 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:43:21.741559 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:43:21.741568 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:43:21.741578 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:43:21.741588 | orchestrator |
2026-03-03 00:43:21.741598 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-03 00:43:21.741608 | orchestrator | Tuesday 03 March 2026 00:43:11 +0000 (0:00:01.862) 0:00:43.522 *********
2026-03-03 00:43:21.741617 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-03 00:43:21.741628 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-03 00:43:21.741637 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-03 00:43:21.741647 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-03 00:43:21.741657 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-03 00:43:21.741666 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-03 00:43:21.741676 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-03 00:43:21.741686 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-03 00:43:21.741696 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:43:21.741706 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-03 00:43:21.741748 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-03 00:43:21.741758 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-03 00:43:21.741767 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-03 00:43:21.741777 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:43:21.741804 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-03 00:43:21.741816 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-03 00:43:21.741828 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-03 00:43:21.741867 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-03 00:43:21.741885 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:43:21.741901 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-03 00:43:21.741916 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-03 00:43:21.741931 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-03 00:43:21.741948 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-03 00:43:21.741965 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:43:21.741981 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-03 00:43:21.741997 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-03 00:43:21.742013 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-03 00:43:21.742096 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-03 00:43:21.742107 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:43:21.742119 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:43:21.742130 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-03 00:43:21.742141 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-03 00:43:21.742153 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-03 00:43:21.742164 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-03 00:43:21.742175 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:43:21.742187 | orchestrator |
2026-03-03 00:43:21.742197 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-03-03 00:43:21.742227 | orchestrator | Tuesday 03 March 2026 00:43:11 +0000 (0:00:00.906) 0:00:44.428 *********
2026-03-03 00:43:21.742238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 00:43:21.742249 | orchestrator |
2026-03-03 00:43:21.742259 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-03-03 00:43:21.742268 | orchestrator | Tuesday 03 March 2026 00:43:13 +0000 (0:00:01.255) 0:00:45.684 *********
2026-03-03 00:43:21.742278 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:43:21.742288 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:43:21.742297 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:43:21.742307 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:43:21.742316 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:43:21.742326 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:43:21.742335 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:43:21.742344 | orchestrator |
2026-03-03 00:43:21.742354 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-03-03 00:43:21.742363 | orchestrator | Tuesday 03 March 2026 00:43:13 +0000 (0:00:00.607) 0:00:46.291 *********
2026-03-03 00:43:21.742373 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:43:21.742382 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:43:21.742392 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:43:21.742401 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:43:21.742411 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:43:21.742420 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:43:21.742430 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:43:21.742439 | orchestrator |
2026-03-03 00:43:21.742449 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-03-03 00:43:21.742458 | orchestrator | Tuesday 03 March 2026 00:43:14 +0000 (0:00:00.786) 0:00:47.078 *********
2026-03-03 00:43:21.742467 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:43:21.742488 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:43:21.742498 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:43:21.742508 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:43:21.742517 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:43:21.742526 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:43:21.742536 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:43:21.742545 | orchestrator |
2026-03-03 00:43:21.742555 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-03-03 00:43:21.742564 | orchestrator | Tuesday 03 March 2026 00:43:15 +0000 (0:00:00.643) 0:00:47.721 *********
2026-03-03 00:43:21.742574 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:43:21.742583 | orchestrator | ok: [testbed-manager]
2026-03-03 00:43:21.742593 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:43:21.742602 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:43:21.742612 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:43:21.742621 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:43:21.742631 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:43:21.742640 | orchestrator |
2026-03-03 00:43:21.742650 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-03-03 00:43:21.742660 | orchestrator | Tuesday 03 March 2026 00:43:17 +0000 (0:00:01.846) 0:00:49.568 *********
2026-03-03 00:43:21.742669 | orchestrator | ok: [testbed-manager]
2026-03-03 00:43:21.742679 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:43:21.742688 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:43:21.742697 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:43:21.742707 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:43:21.742751 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:43:21.742760 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:43:21.742770 | orchestrator |
2026-03-03 00:43:21.742779 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-03 00:43:21.742796 | orchestrator | Tuesday 03 March 2026 00:43:18 +0000 (0:00:01.069) 0:00:50.637 *********
2026-03-03 00:43:21.742813 | orchestrator | ok: [testbed-manager]
2026-03-03 00:43:21.742831 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:43:21.742846 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:43:21.742862 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:43:21.742878 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:43:21.742895 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:43:21.742911 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:43:21.742927 | orchestrator |
2026-03-03 00:43:21.742942 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-03 00:43:21.742952 | orchestrator | Tuesday 03 March 2026 00:43:20 +0000 (0:00:02.249) 0:00:52.887 *********
2026-03-03 00:43:21.742961 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:43:21.742971 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:43:21.742980 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:43:21.742990 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:43:21.742999 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:43:21.743009 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:43:21.743018 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:43:21.743028 | orchestrator |
2026-03-03 00:43:21.743037 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-03 00:43:21.743047 | orchestrator | Tuesday 03 March 2026 00:43:21 +0000 (0:00:00.787) 0:00:53.674 *********
2026-03-03 00:43:21.743057 | orchestrator | skipping: [testbed-manager]
2026-03-03 00:43:21.743066 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:43:21.743076 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:43:21.743085 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:43:21.743094 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:43:21.743104 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:43:21.743113 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:43:21.743123 | orchestrator |
2026-03-03 00:43:21.743132 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:43:21.743143 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-03 00:43:21.743163 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-03 00:43:21.743181 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-03 00:43:22.064836 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-03 00:43:22.064949 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-03 00:43:22.064964 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-03 00:43:22.064976 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-03 00:43:22.064987 | orchestrator |
2026-03-03 00:43:22.064999 | orchestrator |
2026-03-03 00:43:22.065011 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:43:22.065023 | orchestrator | Tuesday 03 March 2026 00:43:21 +0000 (0:00:00.537) 0:00:54.212 *********
2026-03-03 00:43:22.065034 | orchestrator | ===============================================================================
2026-03-03 00:43:22.065045 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.16s
2026-03-03 00:43:22.065056 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.11s
2026-03-03 00:43:22.065067 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.23s
2026-03-03 00:43:22.065157 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.57s
2026-03-03 00:43:22.065177 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.41s
2026-03-03 00:43:22.065195 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.25s
2026-03-03 00:43:22.065215 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.15s
2026-03-03 00:43:22.065233 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.93s
2026-03-03 00:43:22.065251 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.86s
2026-03-03 00:43:22.065270 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.85s
2026-03-03 00:43:22.065283 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.82s
2026-03-03 00:43:22.065294 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.73s
2026-03-03 00:43:22.065304 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.70s
2026-03-03 00:43:22.065315 | orchestrator |
osism.commons.network : Copy netplan configuration ---------------------- 1.63s 2026-03-03 00:43:22.065326 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.26s 2026-03-03 00:43:22.065337 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.26s 2026-03-03 00:43:22.065348 | orchestrator | osism.commons.network : Create required directories --------------------- 1.15s 2026-03-03 00:43:22.065359 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.09s 2026-03-03 00:43:22.065371 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.07s 2026-03-03 00:43:22.065382 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.05s 2026-03-03 00:43:22.351263 | orchestrator | + osism apply wireguard 2026-03-03 00:43:34.343266 | orchestrator | 2026-03-03 00:43:34 | INFO  | Prepare task for execution of wireguard. 2026-03-03 00:43:34.409895 | orchestrator | 2026-03-03 00:43:34 | INFO  | Task 0726a4f0-18b8-4bf5-bf53-a0166a8ef338 (wireguard) was prepared for execution. 2026-03-03 00:43:34.410081 | orchestrator | 2026-03-03 00:43:34 | INFO  | It takes a moment until task 0726a4f0-18b8-4bf5-bf53-a0166a8ef338 (wireguard) has been started and output is visible here. 
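The osism.commons.network recap above shows the role rendering systemd-networkd netdev/network files and a netplan configuration on every node. As a point of reference, a minimal netplan fragment of the general shape such a role renders might look like the following; the file path, interface name, and addressing mode are illustrative assumptions, not values taken from this job:

```yaml
# /etc/netplan/01-illustrative.yaml -- sketch only; interface name and
# settings are placeholder assumptions, not this deployment's config.
network:
  version: 2
  renderer: networkd   # matches the systemd-networkd files the role manages
  ethernets:
    eth0:
      dhcp4: true
```

Netplan compiles such a file into systemd-networkd units on `netplan apply`, which is why the role's handlers above distinguish between reloading systemd-networkd and reacting to a changed netplan configuration.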
2026-03-03 00:43:52.464753 | orchestrator | 2026-03-03 00:43:52.465694 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-03 00:43:52.465717 | orchestrator | 2026-03-03 00:43:52.465723 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-03 00:43:52.465730 | orchestrator | Tuesday 03 March 2026 00:43:38 +0000 (0:00:00.169) 0:00:00.169 ********* 2026-03-03 00:43:52.465735 | orchestrator | ok: [testbed-manager] 2026-03-03 00:43:52.465742 | orchestrator | 2026-03-03 00:43:52.465748 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-03 00:43:52.465754 | orchestrator | Tuesday 03 March 2026 00:43:39 +0000 (0:00:01.097) 0:00:01.267 ********* 2026-03-03 00:43:52.465759 | orchestrator | changed: [testbed-manager] 2026-03-03 00:43:52.465766 | orchestrator | 2026-03-03 00:43:52.465771 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-03 00:43:52.465777 | orchestrator | Tuesday 03 March 2026 00:43:44 +0000 (0:00:05.683) 0:00:06.950 ********* 2026-03-03 00:43:52.465782 | orchestrator | changed: [testbed-manager] 2026-03-03 00:43:52.465787 | orchestrator | 2026-03-03 00:43:52.465793 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-03 00:43:52.465798 | orchestrator | Tuesday 03 March 2026 00:43:45 +0000 (0:00:00.591) 0:00:07.542 ********* 2026-03-03 00:43:52.465804 | orchestrator | changed: [testbed-manager] 2026-03-03 00:43:52.465809 | orchestrator | 2026-03-03 00:43:52.465815 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-03 00:43:52.465820 | orchestrator | Tuesday 03 March 2026 00:43:45 +0000 (0:00:00.448) 0:00:07.990 ********* 2026-03-03 00:43:52.465826 | orchestrator | ok: [testbed-manager] 2026-03-03 00:43:52.465831 | orchestrator | 2026-03-03 
00:43:52.465836 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-03 00:43:52.465842 | orchestrator | Tuesday 03 March 2026 00:43:46 +0000 (0:00:00.670) 0:00:08.661 ********* 2026-03-03 00:43:52.465847 | orchestrator | ok: [testbed-manager] 2026-03-03 00:43:52.465853 | orchestrator | 2026-03-03 00:43:52.465858 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-03 00:43:52.465864 | orchestrator | Tuesday 03 March 2026 00:43:46 +0000 (0:00:00.411) 0:00:09.072 ********* 2026-03-03 00:43:52.465869 | orchestrator | ok: [testbed-manager] 2026-03-03 00:43:52.465874 | orchestrator | 2026-03-03 00:43:52.465880 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-03 00:43:52.465885 | orchestrator | Tuesday 03 March 2026 00:43:47 +0000 (0:00:00.419) 0:00:09.491 ********* 2026-03-03 00:43:52.465891 | orchestrator | changed: [testbed-manager] 2026-03-03 00:43:52.465897 | orchestrator | 2026-03-03 00:43:52.465902 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-03 00:43:52.465908 | orchestrator | Tuesday 03 March 2026 00:43:48 +0000 (0:00:01.150) 0:00:10.642 ********* 2026-03-03 00:43:52.465913 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-03 00:43:52.465919 | orchestrator | changed: [testbed-manager] 2026-03-03 00:43:52.465924 | orchestrator | 2026-03-03 00:43:52.465930 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-03 00:43:52.465935 | orchestrator | Tuesday 03 March 2026 00:43:49 +0000 (0:00:00.921) 0:00:11.564 ********* 2026-03-03 00:43:52.465941 | orchestrator | changed: [testbed-manager] 2026-03-03 00:43:52.465946 | orchestrator | 2026-03-03 00:43:52.465951 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-03 
00:43:52.465957 | orchestrator | Tuesday 03 March 2026 00:43:51 +0000 (0:00:01.683) 0:00:13.247 ********* 2026-03-03 00:43:52.465962 | orchestrator | changed: [testbed-manager] 2026-03-03 00:43:52.465968 | orchestrator | 2026-03-03 00:43:52.465973 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:43:52.466013 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:43:52.466057 | orchestrator | 2026-03-03 00:43:52.466063 | orchestrator | 2026-03-03 00:43:52.466068 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:43:52.466074 | orchestrator | Tuesday 03 March 2026 00:43:52 +0000 (0:00:00.951) 0:00:14.199 ********* 2026-03-03 00:43:52.466079 | orchestrator | =============================================================================== 2026-03-03 00:43:52.466084 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.68s 2026-03-03 00:43:52.466090 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.68s 2026-03-03 00:43:52.466095 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s 2026-03-03 00:43:52.466101 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.10s 2026-03-03 00:43:52.466106 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2026-03-03 00:43:52.466111 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s 2026-03-03 00:43:52.466117 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.67s 2026-03-03 00:43:52.466122 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s 2026-03-03 00:43:52.466127 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.45s 2026-03-03 00:43:52.466136 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-03-03 00:43:52.466142 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s 2026-03-03 00:43:52.754949 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-03 00:43:52.796847 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-03 00:43:52.796950 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-03 00:43:52.872590 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 182 0 --:--:-- --:--:-- --:--:-- 181 100 14 100 14 0 0 181 0 --:--:-- --:--:-- --:--:-- 181 2026-03-03 00:43:52.884606 | orchestrator | + osism apply --environment custom workarounds 2026-03-03 00:43:54.900643 | orchestrator | 2026-03-03 00:43:54 | INFO  | Trying to run play workarounds in environment custom 2026-03-03 00:44:05.005494 | orchestrator | 2026-03-03 00:44:05 | INFO  | Prepare task for execution of workarounds. 2026-03-03 00:44:05.086155 | orchestrator | 2026-03-03 00:44:05 | INFO  | Task caf4baf2-21c2-452e-9de9-f06a034d71ad (workarounds) was prepared for execution. 2026-03-03 00:44:05.086279 | orchestrator | 2026-03-03 00:44:05 | INFO  | It takes a moment until task caf4baf2-21c2-452e-9de9-f06a034d71ad (workarounds) has been started and output is visible here. 
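The wireguard play above created a server keypair and a preshared key, rendered `wg0.conf` plus per-client configuration files, and started `wg-quick@wg0.service`. A minimal server-side `wg0.conf` of the shape `wg-quick` consumes is sketched below; every key, address, and port is a placeholder assumption, not material from this deployment:

```ini
; Illustrative wg0.conf for wg-quick; all values are placeholders.
[Interface]
PrivateKey = <server-private-key>   ; generated by the role via wg genkey
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>      ; generated by the role via wg genpsk
AllowedIPs = 10.0.0.2/32
```

`wg-quick up wg0` (what `wg-quick@wg0.service` runs) creates the interface, assigns the address, and installs routes for each peer's `AllowedIPs`.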
2026-03-03 00:44:28.252219 | orchestrator | 2026-03-03 00:44:28.252344 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 00:44:28.252362 | orchestrator | 2026-03-03 00:44:28.252375 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-03 00:44:28.252387 | orchestrator | Tuesday 03 March 2026 00:44:08 +0000 (0:00:00.105) 0:00:00.105 ********* 2026-03-03 00:44:28.252398 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-03 00:44:28.252410 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-03 00:44:28.252421 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-03 00:44:28.252433 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-03 00:44:28.252444 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-03 00:44:28.252455 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-03 00:44:28.252493 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-03 00:44:28.252504 | orchestrator | 2026-03-03 00:44:28.252516 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-03 00:44:28.252527 | orchestrator | 2026-03-03 00:44:28.252538 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-03 00:44:28.252549 | orchestrator | Tuesday 03 March 2026 00:44:09 +0000 (0:00:00.584) 0:00:00.690 ********* 2026-03-03 00:44:28.252560 | orchestrator | ok: [testbed-manager] 2026-03-03 00:44:28.252572 | orchestrator | 2026-03-03 00:44:28.252584 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-03 00:44:28.252595 | orchestrator | 2026-03-03 00:44:28.252611 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-03-03 00:44:28.252689 | orchestrator | Tuesday 03 March 2026 00:44:11 +0000 (0:00:02.072) 0:00:02.762 ********* 2026-03-03 00:44:28.252709 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:44:28.252725 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:44:28.252741 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:44:28.252759 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:44:28.252778 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:44:28.252796 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:44:28.252814 | orchestrator | 2026-03-03 00:44:28.252833 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-03 00:44:28.252851 | orchestrator | 2026-03-03 00:44:28.252871 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-03 00:44:28.252890 | orchestrator | Tuesday 03 March 2026 00:44:13 +0000 (0:00:01.838) 0:00:04.600 ********* 2026-03-03 00:44:28.252910 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-03 00:44:28.252931 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-03 00:44:28.252950 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-03 00:44:28.252970 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-03 00:44:28.252988 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-03 00:44:28.253006 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-03 00:44:28.253024 | orchestrator | 2026-03-03 00:44:28.253042 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-03-03 00:44:28.253062 | orchestrator | Tuesday 03 March 2026 00:44:14 +0000 (0:00:01.402) 0:00:06.003 ********* 2026-03-03 00:44:28.253083 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:44:28.253103 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:44:28.253122 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:44:28.253141 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:44:28.253159 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:44:28.253178 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:44:28.253197 | orchestrator | 2026-03-03 00:44:28.253215 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-03 00:44:28.253251 | orchestrator | Tuesday 03 March 2026 00:44:18 +0000 (0:00:03.727) 0:00:09.730 ********* 2026-03-03 00:44:28.253271 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:44:28.253289 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:44:28.253308 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:44:28.253326 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:44:28.253344 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:44:28.253362 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:44:28.253380 | orchestrator | 2026-03-03 00:44:28.253399 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-03 00:44:28.253415 | orchestrator | 2026-03-03 00:44:28.253433 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-03 00:44:28.253468 | orchestrator | Tuesday 03 March 2026 00:44:18 +0000 (0:00:00.603) 0:00:10.333 ********* 2026-03-03 00:44:28.253489 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:44:28.253508 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:44:28.253527 | orchestrator | changed: [testbed-node-3] 2026-03-03 
00:44:28.253546 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:44:28.253564 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:44:28.253583 | orchestrator | changed: [testbed-manager] 2026-03-03 00:44:28.253602 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:44:28.253738 | orchestrator | 2026-03-03 00:44:28.253763 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-03 00:44:28.253784 | orchestrator | Tuesday 03 March 2026 00:44:20 +0000 (0:00:01.407) 0:00:11.741 ********* 2026-03-03 00:44:28.253803 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:44:28.253821 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:44:28.253840 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:44:28.253857 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:44:28.253873 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:44:28.253891 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:44:28.253937 | orchestrator | changed: [testbed-manager] 2026-03-03 00:44:28.253959 | orchestrator | 2026-03-03 00:44:28.253978 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-03 00:44:28.253997 | orchestrator | Tuesday 03 March 2026 00:44:21 +0000 (0:00:01.401) 0:00:13.142 ********* 2026-03-03 00:44:28.254015 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:44:28.254124 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:44:28.254142 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:44:28.254158 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:44:28.254175 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:44:28.254191 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:44:28.254207 | orchestrator | ok: [testbed-manager] 2026-03-03 00:44:28.254223 | orchestrator | 2026-03-03 00:44:28.254239 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-03 00:44:28.254256 | orchestrator 
| Tuesday 03 March 2026 00:44:23 +0000 (0:00:01.418) 0:00:14.560 ********* 2026-03-03 00:44:28.254273 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:44:28.254289 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:44:28.254304 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:44:28.254321 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:44:28.254338 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:44:28.254354 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:44:28.254370 | orchestrator | changed: [testbed-manager] 2026-03-03 00:44:28.254386 | orchestrator | 2026-03-03 00:44:28.254403 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-03 00:44:28.254419 | orchestrator | Tuesday 03 March 2026 00:44:24 +0000 (0:00:01.615) 0:00:16.176 ********* 2026-03-03 00:44:28.254435 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:44:28.254452 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:44:28.254468 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:44:28.254484 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:44:28.254500 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:44:28.254516 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:44:28.254526 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:44:28.254535 | orchestrator | 2026-03-03 00:44:28.254545 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-03 00:44:28.254555 | orchestrator | 2026-03-03 00:44:28.254564 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-03 00:44:28.254574 | orchestrator | Tuesday 03 March 2026 00:44:25 +0000 (0:00:00.583) 0:00:16.759 ********* 2026-03-03 00:44:28.254584 | orchestrator | ok: [testbed-manager] 2026-03-03 00:44:28.254593 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:44:28.254603 | orchestrator | ok: 
[testbed-node-3] 2026-03-03 00:44:28.254612 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:44:28.254666 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:44:28.254683 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:44:28.254699 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:44:28.254715 | orchestrator | 2026-03-03 00:44:28.254728 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:44:28.254740 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:44:28.254751 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:28.254762 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:28.254771 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:28.254786 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:28.254802 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:28.254829 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:28.254845 | orchestrator | 2026-03-03 00:44:28.254862 | orchestrator | 2026-03-03 00:44:28.254879 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:44:28.254896 | orchestrator | Tuesday 03 March 2026 00:44:28 +0000 (0:00:02.942) 0:00:19.702 ********* 2026-03-03 00:44:28.254907 | orchestrator | =============================================================================== 2026-03-03 00:44:28.254916 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.73s 2026-03-03 00:44:28.254926 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.94s 2026-03-03 00:44:28.254936 | orchestrator | Apply netplan configuration --------------------------------------------- 2.07s 2026-03-03 00:44:28.254945 | orchestrator | Apply netplan configuration --------------------------------------------- 1.84s 2026-03-03 00:44:28.254955 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.62s 2026-03-03 00:44:28.254964 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.42s 2026-03-03 00:44:28.254974 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.41s 2026-03-03 00:44:28.254983 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.40s 2026-03-03 00:44:28.254993 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.40s 2026-03-03 00:44:28.255003 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.60s 2026-03-03 00:44:28.255012 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.58s 2026-03-03 00:44:28.255035 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.58s 2026-03-03 00:44:28.631196 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-03 00:44:40.500984 | orchestrator | 2026-03-03 00:44:40 | INFO  | Prepare task for execution of reboot. 2026-03-03 00:44:40.566098 | orchestrator | 2026-03-03 00:44:40 | INFO  | Task f2ea2baf-fec2-4b18-95ed-9098373a53b7 (reboot) was prepared for execution. 2026-03-03 00:44:40.566199 | orchestrator | 2026-03-03 00:44:40 | INFO  | It takes a moment until task f2ea2baf-fec2-4b18-95ed-9098373a53b7 (reboot) has been started and output is visible here. 
2026-03-03 00:44:50.208245 | orchestrator | 2026-03-03 00:44:50.208346 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-03 00:44:50.208379 | orchestrator | 2026-03-03 00:44:50.208385 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-03 00:44:50.208392 | orchestrator | Tuesday 03 March 2026 00:44:44 +0000 (0:00:00.187) 0:00:00.187 ********* 2026-03-03 00:44:50.208398 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:44:50.208405 | orchestrator | 2026-03-03 00:44:50.208411 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-03 00:44:50.208417 | orchestrator | Tuesday 03 March 2026 00:44:44 +0000 (0:00:00.091) 0:00:00.278 ********* 2026-03-03 00:44:50.208423 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:44:50.208429 | orchestrator | 2026-03-03 00:44:50.208435 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-03 00:44:50.208441 | orchestrator | Tuesday 03 March 2026 00:44:45 +0000 (0:00:00.946) 0:00:01.225 ********* 2026-03-03 00:44:50.208447 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:44:50.208452 | orchestrator | 2026-03-03 00:44:50.208458 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-03 00:44:50.208464 | orchestrator | 2026-03-03 00:44:50.208470 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-03 00:44:50.208476 | orchestrator | Tuesday 03 March 2026 00:44:45 +0000 (0:00:00.099) 0:00:01.325 ********* 2026-03-03 00:44:50.208482 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:44:50.208487 | orchestrator | 2026-03-03 00:44:50.208493 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-03 00:44:50.208499 | orchestrator | Tuesday 03 March 2026 
00:44:45 +0000 (0:00:00.092) 0:00:01.417 ********* 2026-03-03 00:44:50.208505 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:44:50.208511 | orchestrator | 2026-03-03 00:44:50.208517 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-03 00:44:50.208523 | orchestrator | Tuesday 03 March 2026 00:44:46 +0000 (0:00:00.674) 0:00:02.091 ********* 2026-03-03 00:44:50.208528 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:44:50.208534 | orchestrator | 2026-03-03 00:44:50.208540 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-03 00:44:50.208546 | orchestrator | 2026-03-03 00:44:50.208552 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-03 00:44:50.208557 | orchestrator | Tuesday 03 March 2026 00:44:46 +0000 (0:00:00.098) 0:00:02.189 ********* 2026-03-03 00:44:50.208563 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:44:50.208569 | orchestrator | 2026-03-03 00:44:50.208575 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-03 00:44:50.208581 | orchestrator | Tuesday 03 March 2026 00:44:46 +0000 (0:00:00.170) 0:00:02.360 ********* 2026-03-03 00:44:50.208658 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:44:50.208665 | orchestrator | 2026-03-03 00:44:50.208672 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-03 00:44:50.208678 | orchestrator | Tuesday 03 March 2026 00:44:47 +0000 (0:00:00.717) 0:00:03.077 ********* 2026-03-03 00:44:50.208685 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:44:50.208692 | orchestrator | 2026-03-03 00:44:50.208699 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-03 00:44:50.208705 | orchestrator | 2026-03-03 00:44:50.208711 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-03 00:44:50.208718 | orchestrator | Tuesday 03 March 2026 00:44:47 +0000 (0:00:00.138) 0:00:03.216 ********* 2026-03-03 00:44:50.208725 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:44:50.208732 | orchestrator | 2026-03-03 00:44:50.208752 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-03 00:44:50.208759 | orchestrator | Tuesday 03 March 2026 00:44:47 +0000 (0:00:00.091) 0:00:03.307 ********* 2026-03-03 00:44:50.208766 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:44:50.208773 | orchestrator | 2026-03-03 00:44:50.208780 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-03 00:44:50.208786 | orchestrator | Tuesday 03 March 2026 00:44:48 +0000 (0:00:00.654) 0:00:03.962 ********* 2026-03-03 00:44:50.208799 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:44:50.208807 | orchestrator | 2026-03-03 00:44:50.208813 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-03 00:44:50.208821 | orchestrator | 2026-03-03 00:44:50.208828 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-03 00:44:50.208835 | orchestrator | Tuesday 03 March 2026 00:44:48 +0000 (0:00:00.100) 0:00:04.062 ********* 2026-03-03 00:44:50.208842 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:44:50.208848 | orchestrator | 2026-03-03 00:44:50.208855 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-03 00:44:50.208862 | orchestrator | Tuesday 03 March 2026 00:44:48 +0000 (0:00:00.098) 0:00:04.161 ********* 2026-03-03 00:44:50.208869 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:44:50.208876 | orchestrator | 2026-03-03 00:44:50.208883 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-03 00:44:50.208890 | orchestrator | Tuesday 03 March 2026 00:44:49 +0000 (0:00:00.609) 0:00:04.771 ********* 2026-03-03 00:44:50.208897 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:44:50.208903 | orchestrator | 2026-03-03 00:44:50.208910 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-03 00:44:50.208917 | orchestrator | 2026-03-03 00:44:50.208924 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-03 00:44:50.208931 | orchestrator | Tuesday 03 March 2026 00:44:49 +0000 (0:00:00.115) 0:00:04.886 ********* 2026-03-03 00:44:50.208937 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:44:50.208945 | orchestrator | 2026-03-03 00:44:50.208951 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-03 00:44:50.208959 | orchestrator | Tuesday 03 March 2026 00:44:49 +0000 (0:00:00.092) 0:00:04.979 ********* 2026-03-03 00:44:50.208966 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:44:50.208972 | orchestrator | 2026-03-03 00:44:50.208978 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-03 00:44:50.208984 | orchestrator | Tuesday 03 March 2026 00:44:49 +0000 (0:00:00.669) 0:00:05.648 ********* 2026-03-03 00:44:50.209009 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:44:50.209017 | orchestrator | 2026-03-03 00:44:50.209024 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:44:50.209032 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:50.209041 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:50.209048 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-03 00:44:50.209056 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:50.209063 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:50.209070 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:44:50.209077 | orchestrator | 2026-03-03 00:44:50.209084 | orchestrator | 2026-03-03 00:44:50.209091 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:44:50.209099 | orchestrator | Tuesday 03 March 2026 00:44:49 +0000 (0:00:00.034) 0:00:05.683 ********* 2026-03-03 00:44:50.209106 | orchestrator | =============================================================================== 2026-03-03 00:44:50.209113 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.27s 2026-03-03 00:44:50.209125 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s 2026-03-03 00:44:50.209132 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2026-03-03 00:44:50.437420 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-03 00:45:02.230953 | orchestrator | 2026-03-03 00:45:02 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-03 00:45:02.305158 | orchestrator | 2026-03-03 00:45:02 | INFO  | Task 52c973f2-3912-4860-b968-d045673d7423 (wait-for-connection) was prepared for execution. 2026-03-03 00:45:02.305376 | orchestrator | 2026-03-03 00:45:02 | INFO  | It takes a moment until task 52c973f2-3912-4860-b968-d045673d7423 (wait-for-connection) has been started and output is visible here. 
2026-03-03 00:45:18.251417 | orchestrator | 2026-03-03 00:45:18.251526 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-03 00:45:18.251540 | orchestrator | 2026-03-03 00:45:18.251618 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-03 00:45:18.251630 | orchestrator | Tuesday 03 March 2026 00:45:06 +0000 (0:00:00.208) 0:00:00.208 ********* 2026-03-03 00:45:18.251641 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:45:18.251652 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:45:18.251681 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:45:18.251691 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:45:18.251701 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:45:18.251710 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:45:18.251720 | orchestrator | 2026-03-03 00:45:18.251730 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:45:18.251741 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:45:18.251752 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:45:18.251762 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:45:18.251772 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:45:18.251782 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:45:18.251791 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:45:18.251801 | orchestrator | 2026-03-03 00:45:18.251811 | orchestrator | 2026-03-03 00:45:18.251820 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-03 00:45:18.251830 | orchestrator | Tuesday 03 March 2026 00:45:17 +0000 (0:00:11.519) 0:00:11.727 ********* 2026-03-03 00:45:18.251840 | orchestrator | =============================================================================== 2026-03-03 00:45:18.251849 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2026-03-03 00:45:18.602084 | orchestrator | + osism apply hddtemp 2026-03-03 00:45:30.457808 | orchestrator | 2026-03-03 00:45:30 | INFO  | Prepare task for execution of hddtemp. 2026-03-03 00:45:30.567478 | orchestrator | 2026-03-03 00:45:30 | INFO  | Task 3cd599df-52c5-4a48-a725-b5c51c196d97 (hddtemp) was prepared for execution. 2026-03-03 00:45:30.567628 | orchestrator | 2026-03-03 00:45:30 | INFO  | It takes a moment until task 3cd599df-52c5-4a48-a725-b5c51c196d97 (hddtemp) has been started and output is visible here. 2026-03-03 00:45:59.875928 | orchestrator | 2026-03-03 00:45:59.876036 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-03 00:45:59.876048 | orchestrator | 2026-03-03 00:45:59.876055 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-03 00:45:59.876082 | orchestrator | Tuesday 03 March 2026 00:45:34 +0000 (0:00:00.230) 0:00:00.230 ********* 2026-03-03 00:45:59.876090 | orchestrator | ok: [testbed-manager] 2026-03-03 00:45:59.876098 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:45:59.876104 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:45:59.876110 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:45:59.876117 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:45:59.876124 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:45:59.876130 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:45:59.876136 | orchestrator | 2026-03-03 00:45:59.876143 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-03 00:45:59.876149 | orchestrator | Tuesday 03 March 2026 00:45:35 +0000 (0:00:00.611) 0:00:00.841 ********* 2026-03-03 00:45:59.876156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 00:45:59.876164 | orchestrator | 2026-03-03 00:45:59.876171 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-03 00:45:59.876177 | orchestrator | Tuesday 03 March 2026 00:45:36 +0000 (0:00:01.134) 0:00:01.976 ********* 2026-03-03 00:45:59.876183 | orchestrator | ok: [testbed-manager] 2026-03-03 00:45:59.876189 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:45:59.876196 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:45:59.876202 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:45:59.876208 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:45:59.876214 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:45:59.876220 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:45:59.876226 | orchestrator | 2026-03-03 00:45:59.876233 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-03 00:45:59.876239 | orchestrator | Tuesday 03 March 2026 00:45:38 +0000 (0:00:02.094) 0:00:04.071 ********* 2026-03-03 00:45:59.876245 | orchestrator | changed: [testbed-manager] 2026-03-03 00:45:59.876252 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:45:59.876259 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:45:59.876265 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:45:59.876271 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:45:59.876277 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:45:59.876283 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:45:59.876289 | 
orchestrator | 2026-03-03 00:45:59.876296 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-03 00:45:59.876302 | orchestrator | Tuesday 03 March 2026 00:45:39 +0000 (0:00:01.042) 0:00:05.113 ********* 2026-03-03 00:45:59.876308 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:45:59.876314 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:45:59.876320 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:45:59.876326 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:45:59.876332 | orchestrator | ok: [testbed-manager] 2026-03-03 00:45:59.876338 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:45:59.876345 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:45:59.876351 | orchestrator | 2026-03-03 00:45:59.876357 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-03 00:45:59.876363 | orchestrator | Tuesday 03 March 2026 00:45:41 +0000 (0:00:01.738) 0:00:06.852 ********* 2026-03-03 00:45:59.876369 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:45:59.876388 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:45:59.876395 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:45:59.876401 | orchestrator | changed: [testbed-manager] 2026-03-03 00:45:59.876407 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:45:59.876413 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:45:59.876419 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:45:59.876425 | orchestrator | 2026-03-03 00:45:59.876432 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-03 00:45:59.876438 | orchestrator | Tuesday 03 March 2026 00:45:41 +0000 (0:00:00.673) 0:00:07.525 ********* 2026-03-03 00:45:59.876450 | orchestrator | changed: [testbed-manager] 2026-03-03 00:45:59.876456 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:45:59.876464 | orchestrator | changed: [testbed-node-0] 
2026-03-03 00:45:59.876471 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:45:59.876479 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:45:59.876486 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:45:59.876493 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:45:59.876591 | orchestrator | 2026-03-03 00:45:59.876603 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-03 00:45:59.876613 | orchestrator | Tuesday 03 March 2026 00:45:56 +0000 (0:00:14.191) 0:00:21.717 ********* 2026-03-03 00:45:59.876623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 00:45:59.876633 | orchestrator | 2026-03-03 00:45:59.876642 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-03 00:45:59.876651 | orchestrator | Tuesday 03 March 2026 00:45:57 +0000 (0:00:01.397) 0:00:23.114 ********* 2026-03-03 00:45:59.876660 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:45:59.876669 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:45:59.876678 | orchestrator | changed: [testbed-manager] 2026-03-03 00:45:59.876689 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:45:59.876699 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:45:59.876710 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:45:59.876721 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:45:59.876732 | orchestrator | 2026-03-03 00:45:59.876742 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:45:59.876754 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:45:59.876782 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:45:59.876790 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:45:59.876797 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:45:59.876804 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:45:59.876812 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:45:59.876819 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:45:59.876826 | orchestrator | 2026-03-03 00:45:59.876833 | orchestrator | 2026-03-03 00:45:59.876841 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:45:59.876848 | orchestrator | Tuesday 03 March 2026 00:45:59 +0000 (0:00:02.016) 0:00:25.131 ********* 2026-03-03 00:45:59.876855 | orchestrator | =============================================================================== 2026-03-03 00:45:59.876861 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.19s 2026-03-03 00:45:59.876867 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.09s 2026-03-03 00:45:59.876874 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.02s 2026-03-03 00:45:59.876880 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.74s 2026-03-03 00:45:59.876886 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s 2026-03-03 00:45:59.876902 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.13s 2026-03-03 00:45:59.876909 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 1.04s 2026-03-03 00:45:59.876915 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.67s 2026-03-03 00:45:59.876921 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.61s 2026-03-03 00:46:00.265169 | orchestrator | ++ semver latest 7.1.1 2026-03-03 00:46:00.329341 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-03 00:46:00.329413 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-03 00:46:00.329421 | orchestrator | + sudo systemctl restart manager.service 2026-03-03 00:46:41.600966 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-03 00:46:41.601075 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-03 00:46:41.601092 | orchestrator | + local max_attempts=60 2026-03-03 00:46:41.601105 | orchestrator | + local name=ceph-ansible 2026-03-03 00:46:41.601116 | orchestrator | + local attempt_num=1 2026-03-03 00:46:41.601664 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:46:41.633563 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:46:41.633635 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:46:41.633642 | orchestrator | + sleep 5 2026-03-03 00:46:46.639348 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:46:46.670947 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:46:46.671058 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:46:46.671074 | orchestrator | + sleep 5 2026-03-03 00:46:51.674168 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:46:51.708293 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:46:51.708375 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:46:51.708390 | orchestrator | + sleep 5 2026-03-03 00:46:56.713987 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:46:56.755356 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:46:56.755492 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:46:56.755507 | orchestrator | + sleep 5 2026-03-03 00:47:01.760038 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:01.787994 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:01.788099 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:47:01.788121 | orchestrator | + sleep 5 2026-03-03 00:47:06.790910 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:06.826363 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:06.826555 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:47:06.826572 | orchestrator | + sleep 5 2026-03-03 00:47:11.830195 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:11.864928 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:11.865025 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:47:11.865040 | orchestrator | + sleep 5 2026-03-03 00:47:16.872209 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:16.895025 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:16.895127 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:47:16.895140 | orchestrator | + sleep 5 2026-03-03 00:47:21.898250 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:21.932184 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:21.932290 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:47:21.932306 | orchestrator | + sleep 5 2026-03-03 00:47:26.935528 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:26.967725 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:26.967824 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:47:26.967835 | orchestrator | + sleep 5 2026-03-03 00:47:31.972137 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:32.002473 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:32.002576 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:47:32.002589 | orchestrator | + sleep 5 2026-03-03 00:47:37.006280 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:37.042327 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:37.042523 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:47:37.042554 | orchestrator | + sleep 5 2026-03-03 00:47:42.046597 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:42.069979 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:42.070127 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-03 00:47:42.070142 | orchestrator | + sleep 5 2026-03-03 00:47:47.073791 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-03 00:47:47.108421 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:47.108494 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-03 00:47:47.108500 | orchestrator | + local max_attempts=60 2026-03-03 00:47:47.108505 | orchestrator | + local name=kolla-ansible 2026-03-03 00:47:47.108509 | orchestrator | + local attempt_num=1 2026-03-03 00:47:47.109690 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-03 00:47:47.139149 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:47.139233 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-03 00:47:47.139244 | orchestrator | + local max_attempts=60 2026-03-03 00:47:47.139252 | orchestrator | + local name=osism-ansible 2026-03-03 00:47:47.139259 | orchestrator | + local attempt_num=1 2026-03-03 00:47:47.140100 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-03 00:47:47.177980 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-03 00:47:47.178108 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-03 00:47:47.178119 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-03 00:47:47.343121 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-03 00:47:47.494065 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-03 00:47:47.815579 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-03 00:47:47.815769 | orchestrator | + osism apply gather-facts 2026-03-03 00:47:59.802150 | orchestrator | 2026-03-03 00:47:59 | INFO  | Prepare task for execution of gather-facts. 2026-03-03 00:47:59.861894 | orchestrator | 2026-03-03 00:47:59 | INFO  | Task a30d925d-7892-4079-bc7e-d8289137ac84 (gather-facts) was prepared for execution. 2026-03-03 00:47:59.861983 | orchestrator | 2026-03-03 00:47:59 | INFO  | It takes a moment until task a30d925d-7892-4079-bc7e-d8289137ac84 (gather-facts) has been started and output is visible here. 
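The `wait_for_container_healthy` calls traced above (for ceph-ansible, kolla-ansible, and osism-ansible) can be reconstructed from the `set -x` output as a simple polling loop. This is a sketch assembled from the trace, not the actual script; the `DOCKER` override variable is an addition for illustration, the real trace calls `/usr/bin/docker` directly.

```shell
#!/usr/bin/env bash
# Sketch of the health-check loop seen in the trace: poll
# `docker inspect` until the container reports "healthy",
# sleeping 5s between attempts, giving up after max_attempts.
DOCKER="${DOCKER:-/usr/bin/docker}"  # overridable for testing (assumption)

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Post-increment comparison, matching the traced
        # `(( attempt_num++ == max_attempts ))` check.
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

Note how the trace shows ceph-ansible moving through `unhealthy` → `starting` → `healthy` over roughly a minute, while kolla-ansible and osism-ansible pass on the first poll.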
2026-03-03 00:48:12.974502 | orchestrator | 2026-03-03 00:48:12.974563 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-03 00:48:12.974571 | orchestrator | 2026-03-03 00:48:12.974577 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-03 00:48:12.974582 | orchestrator | Tuesday 03 March 2026 00:48:03 +0000 (0:00:00.199) 0:00:00.199 ********* 2026-03-03 00:48:12.974596 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:48:12.974608 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:48:12.974613 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:48:12.974618 | orchestrator | ok: [testbed-manager] 2026-03-03 00:48:12.974623 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:48:12.974627 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:48:12.974632 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:48:12.974640 | orchestrator | 2026-03-03 00:48:12.974650 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-03 00:48:12.974662 | orchestrator | 2026-03-03 00:48:12.974670 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-03 00:48:12.974678 | orchestrator | Tuesday 03 March 2026 00:48:12 +0000 (0:00:08.416) 0:00:08.615 ********* 2026-03-03 00:48:12.974687 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:48:12.974694 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:48:12.974700 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:48:12.974704 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:48:12.974709 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:48:12.974714 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:48:12.974719 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:48:12.974724 | orchestrator | 2026-03-03 00:48:12.974729 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-03 00:48:12.974750 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:48:12.974755 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:48:12.974770 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:48:12.974775 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:48:12.974780 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:48:12.974785 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:48:12.974790 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-03 00:48:12.974795 | orchestrator | 2026-03-03 00:48:12.974800 | orchestrator | 2026-03-03 00:48:12.974804 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:48:12.974809 | orchestrator | Tuesday 03 March 2026 00:48:12 +0000 (0:00:00.448) 0:00:09.064 ********* 2026-03-03 00:48:12.974814 | orchestrator | =============================================================================== 2026-03-03 00:48:12.974819 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.42s 2026-03-03 00:48:12.974824 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-03-03 00:48:13.215142 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-03 00:48:13.231975 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-03 
00:48:13.242080 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-03 00:48:13.252129 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-03 00:48:13.264284 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-03 00:48:13.279519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-03 00:48:13.296095 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-03 00:48:13.314482 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-03 00:48:13.344140 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-03 00:48:13.345892 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-03 00:48:13.365800 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-03 00:48:13.382360 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-03 00:48:13.394998 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-03 00:48:13.404950 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-03 00:48:13.413277 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-03 00:48:13.424311 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-03 00:48:13.434202 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-03 00:48:13.444500 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-03 00:48:13.460970 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-03 00:48:13.473100 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-03 00:48:13.490616 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-03 00:48:13.504212 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-03 00:48:13.514791 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-03 00:48:13.525073 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-03 00:48:13.626365 | orchestrator | ok: Runtime: 0:25:02.924461 2026-03-03 00:48:13.723296 | 2026-03-03 00:48:13.723440 | TASK [Deploy services] 2026-03-03 00:48:14.256103 | orchestrator | skipping: Conditional result was False 2026-03-03 00:48:14.272983 | 2026-03-03 00:48:14.273146 | TASK [Deploy in a nutshell] 2026-03-03 00:48:14.943551 | orchestrator | + set -e 2026-03-03 00:48:14.945162 | orchestrator | 2026-03-03 00:48:14.945210 | orchestrator | # PULL IMAGES 2026-03-03 00:48:14.945220 | orchestrator | 2026-03-03 00:48:14.945233 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-03 00:48:14.945246 | orchestrator | ++ export INTERACTIVE=false 2026-03-03 00:48:14.945262 | orchestrator | ++ INTERACTIVE=false 2026-03-03 00:48:14.945283 | 
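The long run of `sudo ln -sf` commands above installs each deploy/upgrade helper under `/usr/local/bin` with the numeric ordering prefix stripped (e.g. `deploy/100-ceph-with-ansible.sh` becomes `deploy-ceph-with-ansible`). A hypothetical loop capturing that naming pattern, purely illustrative, `link_deploy_scripts` and its parameters are not in the trace, and the actual script spells each link out explicitly (with exceptions such as `001-helpers.sh` → `deploy-helper`):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the symlink setup traced above: strip the
# three-digit ordering prefix from each deploy script and expose it
# on the PATH as deploy-<name>.
link_deploy_scripts() {
    local src_dir="$1" bin_dir="$2" script name
    for script in "$src_dir"/*.sh; do
        name="$(basename "$script" .sh)"    # e.g. 100-ceph-with-ansible
        name="${name#[0-9][0-9][0-9]-}"     # drop the "100-" prefix
        ln -sf "$script" "$bin_dir/deploy-$name"
    done
}
```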
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-03 00:48:14.945329 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-03 00:48:14.945373 | orchestrator | + source /opt/manager-vars.sh 2026-03-03 00:48:14.945385 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-03 00:48:14.945398 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-03 00:48:14.945405 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-03 00:48:14.945415 | orchestrator | ++ CEPH_VERSION=reef 2026-03-03 00:48:14.945422 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-03 00:48:14.945432 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-03 00:48:14.945438 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-03 00:48:14.945447 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-03 00:48:14.945455 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-03 00:48:14.945462 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-03 00:48:14.945467 | orchestrator | ++ export ARA=false 2026-03-03 00:48:14.945472 | orchestrator | ++ ARA=false 2026-03-03 00:48:14.945476 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-03 00:48:14.945480 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-03 00:48:14.945485 | orchestrator | ++ export TEMPEST=true 2026-03-03 00:48:14.945492 | orchestrator | ++ TEMPEST=true 2026-03-03 00:48:14.945498 | orchestrator | ++ export IS_ZUUL=true 2026-03-03 00:48:14.945504 | orchestrator | ++ IS_ZUUL=true 2026-03-03 00:48:14.945511 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.90 2026-03-03 00:48:14.945518 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.90 2026-03-03 00:48:14.945524 | orchestrator | ++ export EXTERNAL_API=false 2026-03-03 00:48:14.945530 | orchestrator | ++ EXTERNAL_API=false 2026-03-03 00:48:14.945537 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-03 00:48:14.945543 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-03 00:48:14.945550 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-03 00:48:14.945556 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-03 00:48:14.945563 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-03 00:48:14.945569 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-03 00:48:14.945576 | orchestrator | + echo 2026-03-03 00:48:14.945582 | orchestrator | + echo '# PULL IMAGES' 2026-03-03 00:48:14.945589 | orchestrator | + echo 2026-03-03 00:48:14.945603 | orchestrator | ++ semver latest 7.0.0 2026-03-03 00:48:15.002789 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-03 00:48:15.002935 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-03 00:48:15.002962 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-03 00:48:16.782835 | orchestrator | 2026-03-03 00:48:16 | INFO  | Trying to run play pull-images in environment custom 2026-03-03 00:48:26.885076 | orchestrator | 2026-03-03 00:48:26 | INFO  | Prepare task for execution of pull-images. 2026-03-03 00:48:26.970004 | orchestrator | 2026-03-03 00:48:26 | INFO  | Task 1c20137b-1141-4c25-af0a-7e299e87d676 (pull-images) was prepared for execution. 2026-03-03 00:48:26.970241 | orchestrator | 2026-03-03 00:48:26 | INFO  | Task 1c20137b-1141-4c25-af0a-7e299e87d676 is running in background. No more output. Check ARA for logs. 2026-03-03 00:48:29.582156 | orchestrator | 2026-03-03 00:48:29 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-03 00:48:39.590550 | orchestrator | 2026-03-03 00:48:39 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-03 00:48:39.656730 | orchestrator | 2026-03-03 00:48:39 | INFO  | Task 1377631b-0fc7-4989-babe-153bba90fc21 (wipe-partitions) was prepared for execution. 2026-03-03 00:48:39.656833 | orchestrator | 2026-03-03 00:48:39 | INFO  | It takes a moment until task 1377631b-0fc7-4989-babe-153bba90fc21 (wipe-partitions) has been started and output is visible here. 
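The trace above gates the extended pull-images run on `semver latest 7.0.0` (which prints `-1`) followed by a literal `latest` match. A minimal stand-in for that version gate — the `semver` comparator below is a `sort -V` sketch for illustration, not the helper actually installed on the manager:

```shell
#!/usr/bin/env bash
# Stand-in version gate mirroring the trace above: take the extended
# pull-images path when the manager version is "latest" or >= 7.0.0.
# This semver() is a sort -V sketch, not the testbed's own comparator.
semver() {
    # prints -1, 0 or 1 depending on how $1 compares to $2
    if [[ "$1" == "$2" ]]; then echo 0; return; fi
    local lower
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [[ "$lower" == "$1" ]]; then echo -1; else echo 1; fi
}

MANAGER_VERSION="${MANAGER_VERSION:-latest}"
if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
    echo "pull-images: using environment custom"
fi
```

The two-part condition matches the traced `[[ -1 -ge 0 ]]` / `[[ latest == latest ]]` pair: numeric releases are compared, and the non-numeric `latest` tag falls through to the string check.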
2026-03-03 00:48:52.468681 | orchestrator | 2026-03-03 00:48:52.468785 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-03 00:48:52.468796 | orchestrator | 2026-03-03 00:48:52.468802 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-03 00:48:52.468816 | orchestrator | Tuesday 03 March 2026 00:48:44 +0000 (0:00:00.119) 0:00:00.119 ********* 2026-03-03 00:48:52.468846 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:48:52.468855 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:48:52.468861 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:48:52.468867 | orchestrator | 2026-03-03 00:48:52.468874 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-03 00:48:52.468880 | orchestrator | Tuesday 03 March 2026 00:48:44 +0000 (0:00:00.597) 0:00:00.717 ********* 2026-03-03 00:48:52.468891 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:48:52.468897 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:48:52.468904 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:48:52.468910 | orchestrator | 2026-03-03 00:48:52.468916 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-03 00:48:52.468922 | orchestrator | Tuesday 03 March 2026 00:48:45 +0000 (0:00:00.312) 0:00:01.029 ********* 2026-03-03 00:48:52.468928 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:48:52.468935 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:48:52.468941 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:48:52.468947 | orchestrator | 2026-03-03 00:48:52.468953 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-03 00:48:52.468959 | orchestrator | Tuesday 03 March 2026 00:48:45 +0000 (0:00:00.582) 0:00:01.612 ********* 2026-03-03 00:48:52.468965 | orchestrator | skipping: 
[testbed-node-3] 2026-03-03 00:48:52.468971 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:48:52.468978 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:48:52.468984 | orchestrator | 2026-03-03 00:48:52.468990 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-03 00:48:52.468997 | orchestrator | Tuesday 03 March 2026 00:48:45 +0000 (0:00:00.227) 0:00:01.840 ********* 2026-03-03 00:48:52.469003 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-03 00:48:52.469012 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-03 00:48:52.469018 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-03 00:48:52.469024 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-03 00:48:52.469030 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-03 00:48:52.469036 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-03 00:48:52.469043 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-03 00:48:52.469049 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-03 00:48:52.469055 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-03 00:48:52.469061 | orchestrator | 2026-03-03 00:48:52.469068 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-03 00:48:52.469074 | orchestrator | Tuesday 03 March 2026 00:48:47 +0000 (0:00:01.251) 0:00:03.091 ********* 2026-03-03 00:48:52.469081 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-03 00:48:52.469087 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-03 00:48:52.469093 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-03 00:48:52.469099 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-03 00:48:52.469105 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-03-03 00:48:52.469112 | orchestrator | ok: 
[testbed-node-4] => (item=/dev/sdc) 2026-03-03 00:48:52.469118 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-03 00:48:52.469124 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-03 00:48:52.469130 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-03 00:48:52.469136 | orchestrator | 2026-03-03 00:48:52.469143 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-03 00:48:52.469149 | orchestrator | Tuesday 03 March 2026 00:48:48 +0000 (0:00:01.466) 0:00:04.557 ********* 2026-03-03 00:48:52.469155 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-03 00:48:52.469161 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-03 00:48:52.469167 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-03 00:48:52.469179 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-03 00:48:52.469191 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-03 00:48:52.469198 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-03 00:48:52.469204 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-03 00:48:52.469210 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-03 00:48:52.469216 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-03 00:48:52.469222 | orchestrator | 2026-03-03 00:48:52.469230 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-03 00:48:52.469236 | orchestrator | Tuesday 03 March 2026 00:48:50 +0000 (0:00:02.295) 0:00:06.853 ********* 2026-03-03 00:48:52.469243 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:48:52.469250 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:48:52.469257 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:48:52.469263 | orchestrator | 2026-03-03 00:48:52.469269 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-03 00:48:52.469276 | orchestrator | Tuesday 03 March 2026 00:48:51 +0000 (0:00:00.658) 0:00:07.512 ********* 2026-03-03 00:48:52.469304 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:48:52.469311 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:48:52.469317 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:48:52.469324 | orchestrator | 2026-03-03 00:48:52.469330 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:48:52.469338 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:48:52.469346 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:48:52.469366 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:48:52.469374 | orchestrator | 2026-03-03 00:48:52.469380 | orchestrator | 2026-03-03 00:48:52.469387 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:48:52.469394 | orchestrator | Tuesday 03 March 2026 00:48:52 +0000 (0:00:00.676) 0:00:08.188 ********* 2026-03-03 00:48:52.469400 | orchestrator | =============================================================================== 2026-03-03 00:48:52.469407 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.30s 2026-03-03 00:48:52.469413 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.47s 2026-03-03 00:48:52.469419 | orchestrator | Check device availability ----------------------------------------------- 1.25s 2026-03-03 00:48:52.469425 | orchestrator | Request device events from the kernel ----------------------------------- 0.68s 2026-03-03 00:48:52.469431 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.66s 2026-03-03 00:48:52.469438 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s 2026-03-03 00:48:52.469446 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s 2026-03-03 00:48:52.469451 | orchestrator | Remove all rook related logical devices --------------------------------- 0.31s 2026-03-03 00:48:52.469458 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2026-03-03 00:49:04.498263 | orchestrator | 2026-03-03 00:49:04 | INFO  | Prepare task for execution of facts. 2026-03-03 00:49:04.562126 | orchestrator | 2026-03-03 00:49:04 | INFO  | Task 5cb7aca9-a942-40dd-a232-aa65bc65c77a (facts) was prepared for execution. 2026-03-03 00:49:04.562218 | orchestrator | 2026-03-03 00:49:04 | INFO  | It takes a moment until task 5cb7aca9-a942-40dd-a232-aa65bc65c77a (facts) has been started and output is visible here. 
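The wipe-partitions play above boils down to a per-device sequence: drop filesystem/partition-table signatures with `wipefs`, zero the first 32M, then reload udev rules and re-trigger block-device events. A hedged shell sketch of that sequence — the `WIPE_DEVICES` variable and function name are illustrative, not taken from the playbook, and the list is intentionally empty by default so nothing is wiped unless explicitly requested:

```shell
#!/usr/bin/env bash
# Sketch of the per-device wipe the play above performs. wipe_device()
# accepts any block device (or, for testing, a regular file).
wipe_device() {
    local dev="$1"
    wipefs --all "$dev"                          # drop FS/partition signatures
    dd if=/dev/zero of="$dev" bs=1M count=32 \
        conv=notrunc status=none                 # overwrite first 32M with zeros
}

# Opt-in device list, e.g. WIPE_DEVICES="/dev/sdb /dev/sdc /dev/sdd".
for dev in ${WIPE_DEVICES:-}; do
    [ -b "$dev" ] || continue                    # skip absent devices
    wipe_device "$dev"
done

if [ -d /run/udev ]; then                        # only when a udev daemon runs
    udevadm control --reload-rules               # reload udev rules
    udevadm trigger --subsystem-match=block      # request device events from the kernel
fi
```

Zeroing only the first 32M is enough to destroy partition tables, LVM labels, and Ceph bluestore headers without the cost of a full-disk wipe; the udev trigger then makes the kernel re-read the now-blank devices.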
2026-03-03 00:49:15.645090 | orchestrator | 2026-03-03 00:49:15.645170 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-03 00:49:15.645177 | orchestrator | 2026-03-03 00:49:15.645200 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-03 00:49:15.645204 | orchestrator | Tuesday 03 March 2026 00:49:08 +0000 (0:00:00.258) 0:00:00.258 ********* 2026-03-03 00:49:15.645208 | orchestrator | ok: [testbed-manager] 2026-03-03 00:49:15.645213 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:49:15.645219 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:49:15.645225 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:49:15.645231 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:49:15.645240 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:49:15.645248 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:49:15.645297 | orchestrator | 2026-03-03 00:49:15.645316 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-03 00:49:15.645322 | orchestrator | Tuesday 03 March 2026 00:49:09 +0000 (0:00:00.944) 0:00:01.202 ********* 2026-03-03 00:49:15.645329 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:49:15.645335 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:49:15.645341 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:49:15.645346 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:49:15.645351 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:15.645357 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:15.645363 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:49:15.645368 | orchestrator | 2026-03-03 00:49:15.645374 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-03 00:49:15.645379 | orchestrator | 2026-03-03 00:49:15.645385 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-03 00:49:15.645392 | orchestrator | Tuesday 03 March 2026 00:49:10 +0000 (0:00:01.245) 0:00:02.448 ********* 2026-03-03 00:49:15.645398 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:49:15.645404 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:49:15.645410 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:49:15.645417 | orchestrator | ok: [testbed-manager] 2026-03-03 00:49:15.645422 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:49:15.645426 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:49:15.645430 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:49:15.645434 | orchestrator | 2026-03-03 00:49:15.645438 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-03 00:49:15.645442 | orchestrator | 2026-03-03 00:49:15.645445 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-03 00:49:15.645450 | orchestrator | Tuesday 03 March 2026 00:49:14 +0000 (0:00:04.136) 0:00:06.585 ********* 2026-03-03 00:49:15.645453 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:49:15.645457 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:49:15.645461 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:49:15.645464 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:49:15.645468 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:15.645472 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:15.645475 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:49:15.645479 | orchestrator | 2026-03-03 00:49:15.645483 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:49:15.645487 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:49:15.645492 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-03 00:49:15.645496 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:49:15.645500 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:49:15.645504 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:49:15.645514 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:49:15.645518 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:49:15.645522 | orchestrator | 2026-03-03 00:49:15.645526 | orchestrator | 2026-03-03 00:49:15.645529 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:49:15.645533 | orchestrator | Tuesday 03 March 2026 00:49:15 +0000 (0:00:00.515) 0:00:07.100 ********* 2026-03-03 00:49:15.645537 | orchestrator | =============================================================================== 2026-03-03 00:49:15.645541 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.14s 2026-03-03 00:49:15.645545 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2026-03-03 00:49:15.645548 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.94s 2026-03-03 00:49:15.645552 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-03-03 00:49:18.069492 | orchestrator | 2026-03-03 00:49:18 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-03 00:49:18.134783 | orchestrator | 2026-03-03 00:49:18 | INFO  | Task 75769f32-6970-4ff0-ba60-d116822509f5 (ceph-configure-lvm-volumes) was prepared for execution. 
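The osism.commons.facts role above creates a custom facts directory and (conditionally) copies fact files into it; Ansible's setup module then exposes any `*.fact` file in that directory as `ansible_local.<name>`. A minimal sketch of that standard facts.d convention — the directory is passed in as a parameter so the sketch does not have to write to `/etc`, and the fact content is illustrative:

```shell
#!/usr/bin/env bash
# Drop a static JSON fact into an Ansible facts.d directory. The standard
# location is /etc/ansible/facts.d; here it is a parameter for testability.
install_custom_fact() {
    local dir="$1" name="$2" json="$3"
    install -d -m 0755 "$dir"                    # create custom facts directory
    printf '%s\n' "$json" > "$dir/$name.fact"    # shows up as ansible_local.<name>
}

# e.g. install_custom_fact /etc/ansible/facts.d testbed '{"deploy_mode": "manager"}'
```

After the next fact-gathering run, such a fact is reachable in playbooks as `ansible_local.testbed.deploy_mode`.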
2026-03-03 00:49:18.134878 | orchestrator | 2026-03-03 00:49:18 | INFO  | It takes a moment until task 75769f32-6970-4ff0-ba60-d116822509f5 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-03 00:49:30.218718 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-03 00:49:30.218800 | orchestrator | 2.16.14 2026-03-03 00:49:30.218818 | orchestrator | 2026-03-03 00:49:30.218833 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-03 00:49:30.218840 | orchestrator | 2026-03-03 00:49:30.218847 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-03 00:49:30.218854 | orchestrator | Tuesday 03 March 2026 00:49:22 +0000 (0:00:00.311) 0:00:00.311 ********* 2026-03-03 00:49:30.218861 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-03 00:49:30.218868 | orchestrator | 2026-03-03 00:49:30.218875 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-03 00:49:30.218881 | orchestrator | Tuesday 03 March 2026 00:49:22 +0000 (0:00:00.251) 0:00:00.562 ********* 2026-03-03 00:49:30.218885 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:49:30.218890 | orchestrator | 2026-03-03 00:49:30.218894 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.218898 | orchestrator | Tuesday 03 March 2026 00:49:23 +0000 (0:00:00.223) 0:00:00.786 ********* 2026-03-03 00:49:30.218902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-03 00:49:30.218906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-03 00:49:30.218910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-03 00:49:30.218914 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-03 00:49:30.218918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-03 00:49:30.218921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-03 00:49:30.218925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-03 00:49:30.218929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-03 00:49:30.218933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-03 00:49:30.218937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-03 00:49:30.218949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-03 00:49:30.218953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-03 00:49:30.218957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-03 00:49:30.218961 | orchestrator | 2026-03-03 00:49:30.218965 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.218968 | orchestrator | Tuesday 03 March 2026 00:49:23 +0000 (0:00:00.470) 0:00:01.256 ********* 2026-03-03 00:49:30.218972 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.218976 | orchestrator | 2026-03-03 00:49:30.218980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.218984 | orchestrator | Tuesday 03 March 2026 00:49:23 +0000 (0:00:00.209) 0:00:01.466 ********* 2026-03-03 00:49:30.218988 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.218991 | orchestrator | 2026-03-03 00:49:30.218995 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219001 | orchestrator | Tuesday 03 March 2026 00:49:24 +0000 (0:00:00.184) 0:00:01.651 ********* 2026-03-03 00:49:30.219005 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219009 | orchestrator | 2026-03-03 00:49:30.219012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219016 | orchestrator | Tuesday 03 March 2026 00:49:24 +0000 (0:00:00.193) 0:00:01.844 ********* 2026-03-03 00:49:30.219020 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219024 | orchestrator | 2026-03-03 00:49:30.219028 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219033 | orchestrator | Tuesday 03 March 2026 00:49:24 +0000 (0:00:00.204) 0:00:02.048 ********* 2026-03-03 00:49:30.219041 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219050 | orchestrator | 2026-03-03 00:49:30.219055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219061 | orchestrator | Tuesday 03 March 2026 00:49:24 +0000 (0:00:00.205) 0:00:02.254 ********* 2026-03-03 00:49:30.219066 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219072 | orchestrator | 2026-03-03 00:49:30.219077 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219083 | orchestrator | Tuesday 03 March 2026 00:49:24 +0000 (0:00:00.203) 0:00:02.458 ********* 2026-03-03 00:49:30.219088 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219094 | orchestrator | 2026-03-03 00:49:30.219099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219105 | orchestrator | Tuesday 03 March 2026 00:49:25 +0000 (0:00:00.214) 0:00:02.673 ********* 
2026-03-03 00:49:30.219110 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219116 | orchestrator | 2026-03-03 00:49:30.219121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219127 | orchestrator | Tuesday 03 March 2026 00:49:25 +0000 (0:00:00.228) 0:00:02.901 ********* 2026-03-03 00:49:30.219133 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2) 2026-03-03 00:49:30.219139 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2) 2026-03-03 00:49:30.219145 | orchestrator | 2026-03-03 00:49:30.219150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219166 | orchestrator | Tuesday 03 March 2026 00:49:25 +0000 (0:00:00.409) 0:00:03.310 ********* 2026-03-03 00:49:30.219172 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702) 2026-03-03 00:49:30.219178 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702) 2026-03-03 00:49:30.219185 | orchestrator | 2026-03-03 00:49:30.219192 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219204 | orchestrator | Tuesday 03 March 2026 00:49:26 +0000 (0:00:00.748) 0:00:04.059 ********* 2026-03-03 00:49:30.219211 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473) 2026-03-03 00:49:30.219217 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473) 2026-03-03 00:49:30.219224 | orchestrator | 2026-03-03 00:49:30.219244 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219252 | orchestrator | Tuesday 03 March 2026 00:49:27 
+0000 (0:00:00.681) 0:00:04.740 ********* 2026-03-03 00:49:30.219260 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8) 2026-03-03 00:49:30.219270 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8) 2026-03-03 00:49:30.219276 | orchestrator | 2026-03-03 00:49:30.219282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:30.219288 | orchestrator | Tuesday 03 March 2026 00:49:28 +0000 (0:00:00.910) 0:00:05.650 ********* 2026-03-03 00:49:30.219295 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-03 00:49:30.219302 | orchestrator | 2026-03-03 00:49:30.219308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:30.219315 | orchestrator | Tuesday 03 March 2026 00:49:28 +0000 (0:00:00.344) 0:00:05.995 ********* 2026-03-03 00:49:30.219326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-03 00:49:30.219332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-03 00:49:30.219338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-03 00:49:30.219344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-03 00:49:30.219350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-03 00:49:30.219355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-03 00:49:30.219361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-03 00:49:30.219366 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-03 00:49:30.219372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-03 00:49:30.219379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-03 00:49:30.219385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-03 00:49:30.219391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-03 00:49:30.219398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-03 00:49:30.219405 | orchestrator | 2026-03-03 00:49:30.219412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:30.219418 | orchestrator | Tuesday 03 March 2026 00:49:28 +0000 (0:00:00.386) 0:00:06.381 ********* 2026-03-03 00:49:30.219424 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219428 | orchestrator | 2026-03-03 00:49:30.219433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:30.219438 | orchestrator | Tuesday 03 March 2026 00:49:28 +0000 (0:00:00.208) 0:00:06.590 ********* 2026-03-03 00:49:30.219442 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219446 | orchestrator | 2026-03-03 00:49:30.219451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:30.219455 | orchestrator | Tuesday 03 March 2026 00:49:29 +0000 (0:00:00.213) 0:00:06.803 ********* 2026-03-03 00:49:30.219459 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219468 | orchestrator | 2026-03-03 00:49:30.219472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:30.219476 | 
orchestrator | Tuesday 03 March 2026 00:49:29 +0000 (0:00:00.218) 0:00:07.022 ********* 2026-03-03 00:49:30.219481 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219485 | orchestrator | 2026-03-03 00:49:30.219489 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:30.219494 | orchestrator | Tuesday 03 March 2026 00:49:29 +0000 (0:00:00.216) 0:00:07.238 ********* 2026-03-03 00:49:30.219498 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219503 | orchestrator | 2026-03-03 00:49:30.219512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:30.219517 | orchestrator | Tuesday 03 March 2026 00:49:29 +0000 (0:00:00.213) 0:00:07.451 ********* 2026-03-03 00:49:30.219521 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219525 | orchestrator | 2026-03-03 00:49:30.219530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:30.219534 | orchestrator | Tuesday 03 March 2026 00:49:30 +0000 (0:00:00.188) 0:00:07.640 ********* 2026-03-03 00:49:30.219538 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:30.219543 | orchestrator | 2026-03-03 00:49:30.219552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:36.972727 | orchestrator | Tuesday 03 March 2026 00:49:30 +0000 (0:00:00.192) 0:00:07.832 ********* 2026-03-03 00:49:36.972811 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.972819 | orchestrator | 2026-03-03 00:49:36.972824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:36.972829 | orchestrator | Tuesday 03 March 2026 00:49:30 +0000 (0:00:00.188) 0:00:08.021 ********* 2026-03-03 00:49:36.972833 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-03 00:49:36.972838 | orchestrator | 
ok: [testbed-node-3] => (item=sda14) 2026-03-03 00:49:36.972856 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-03 00:49:36.972860 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-03 00:49:36.972864 | orchestrator | 2026-03-03 00:49:36.972869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:36.972879 | orchestrator | Tuesday 03 March 2026 00:49:31 +0000 (0:00:00.805) 0:00:08.827 ********* 2026-03-03 00:49:36.972883 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.972887 | orchestrator | 2026-03-03 00:49:36.972891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:36.972895 | orchestrator | Tuesday 03 March 2026 00:49:31 +0000 (0:00:00.165) 0:00:08.993 ********* 2026-03-03 00:49:36.972900 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.972903 | orchestrator | 2026-03-03 00:49:36.972907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:36.972911 | orchestrator | Tuesday 03 March 2026 00:49:31 +0000 (0:00:00.164) 0:00:09.157 ********* 2026-03-03 00:49:36.972921 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.972925 | orchestrator | 2026-03-03 00:49:36.972934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:36.972937 | orchestrator | Tuesday 03 March 2026 00:49:31 +0000 (0:00:00.182) 0:00:09.340 ********* 2026-03-03 00:49:36.972941 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.972945 | orchestrator | 2026-03-03 00:49:36.972949 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-03 00:49:36.972953 | orchestrator | Tuesday 03 March 2026 00:49:32 +0000 (0:00:00.296) 0:00:09.637 ********* 2026-03-03 00:49:36.972957 | orchestrator | ok: [testbed-node-3] => (item={'key': 
'sdb', 'value': None}) 2026-03-03 00:49:36.972961 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-03 00:49:36.972965 | orchestrator | 2026-03-03 00:49:36.972969 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-03 00:49:36.972973 | orchestrator | Tuesday 03 March 2026 00:49:32 +0000 (0:00:00.153) 0:00:09.790 ********* 2026-03-03 00:49:36.972991 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.972995 | orchestrator | 2026-03-03 00:49:36.972999 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-03 00:49:36.973003 | orchestrator | Tuesday 03 March 2026 00:49:32 +0000 (0:00:00.130) 0:00:09.920 ********* 2026-03-03 00:49:36.973007 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.973010 | orchestrator | 2026-03-03 00:49:36.973017 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-03 00:49:36.973021 | orchestrator | Tuesday 03 March 2026 00:49:32 +0000 (0:00:00.115) 0:00:10.035 ********* 2026-03-03 00:49:36.973024 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.973028 | orchestrator | 2026-03-03 00:49:36.973032 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-03 00:49:36.973036 | orchestrator | Tuesday 03 March 2026 00:49:32 +0000 (0:00:00.124) 0:00:10.160 ********* 2026-03-03 00:49:36.973039 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:49:36.973043 | orchestrator | 2026-03-03 00:49:36.973047 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-03 00:49:36.973051 | orchestrator | Tuesday 03 March 2026 00:49:32 +0000 (0:00:00.134) 0:00:10.295 ********* 2026-03-03 00:49:36.973056 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '896495c2-660d-5a75-b418-75215a0ec973'}}) 
2026-03-03 00:49:36.973060 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd486d743-7c4f-58d7-8950-e96875d5f319'}}) 2026-03-03 00:49:36.973064 | orchestrator | 2026-03-03 00:49:36.973068 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-03 00:49:36.973071 | orchestrator | Tuesday 03 March 2026 00:49:32 +0000 (0:00:00.169) 0:00:10.465 ********* 2026-03-03 00:49:36.973076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '896495c2-660d-5a75-b418-75215a0ec973'}})  2026-03-03 00:49:36.973090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd486d743-7c4f-58d7-8950-e96875d5f319'}})  2026-03-03 00:49:36.973094 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.973098 | orchestrator | 2026-03-03 00:49:36.973102 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-03 00:49:36.973105 | orchestrator | Tuesday 03 March 2026 00:49:32 +0000 (0:00:00.150) 0:00:10.615 ********* 2026-03-03 00:49:36.973109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '896495c2-660d-5a75-b418-75215a0ec973'}})  2026-03-03 00:49:36.973113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd486d743-7c4f-58d7-8950-e96875d5f319'}})  2026-03-03 00:49:36.973117 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.973121 | orchestrator | 2026-03-03 00:49:36.973125 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-03 00:49:36.973129 | orchestrator | Tuesday 03 March 2026 00:49:33 +0000 (0:00:00.293) 0:00:10.909 ********* 2026-03-03 00:49:36.973132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '896495c2-660d-5a75-b418-75215a0ec973'}})  2026-03-03 
00:49:36.973148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd486d743-7c4f-58d7-8950-e96875d5f319'}})  2026-03-03 00:49:36.973152 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.973156 | orchestrator | 2026-03-03 00:49:36.973160 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-03 00:49:36.973164 | orchestrator | Tuesday 03 March 2026 00:49:33 +0000 (0:00:00.128) 0:00:11.037 ********* 2026-03-03 00:49:36.973167 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:49:36.973171 | orchestrator | 2026-03-03 00:49:36.973175 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-03 00:49:36.973179 | orchestrator | Tuesday 03 March 2026 00:49:33 +0000 (0:00:00.126) 0:00:11.164 ********* 2026-03-03 00:49:36.973183 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:49:36.973191 | orchestrator | 2026-03-03 00:49:36.973195 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-03 00:49:36.973199 | orchestrator | Tuesday 03 March 2026 00:49:33 +0000 (0:00:00.131) 0:00:11.295 ********* 2026-03-03 00:49:36.973203 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.973207 | orchestrator | 2026-03-03 00:49:36.973216 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-03 00:49:36.973238 | orchestrator | Tuesday 03 March 2026 00:49:33 +0000 (0:00:00.139) 0:00:11.435 ********* 2026-03-03 00:49:36.973245 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.973251 | orchestrator | 2026-03-03 00:49:36.973257 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-03 00:49:36.973263 | orchestrator | Tuesday 03 March 2026 00:49:33 +0000 (0:00:00.157) 0:00:11.593 ********* 2026-03-03 00:49:36.973269 | orchestrator | skipping: [testbed-node-3] 
2026-03-03 00:49:36.973275 | orchestrator | 2026-03-03 00:49:36.973282 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-03 00:49:36.973288 | orchestrator | Tuesday 03 March 2026 00:49:34 +0000 (0:00:00.156) 0:00:11.749 ********* 2026-03-03 00:49:36.973294 | orchestrator | ok: [testbed-node-3] => { 2026-03-03 00:49:36.973299 | orchestrator |  "ceph_osd_devices": { 2026-03-03 00:49:36.973305 | orchestrator |  "sdb": { 2026-03-03 00:49:36.973311 | orchestrator |  "osd_lvm_uuid": "896495c2-660d-5a75-b418-75215a0ec973" 2026-03-03 00:49:36.973317 | orchestrator |  }, 2026-03-03 00:49:36.973323 | orchestrator |  "sdc": { 2026-03-03 00:49:36.973329 | orchestrator |  "osd_lvm_uuid": "d486d743-7c4f-58d7-8950-e96875d5f319" 2026-03-03 00:49:36.973335 | orchestrator |  } 2026-03-03 00:49:36.973341 | orchestrator |  } 2026-03-03 00:49:36.973347 | orchestrator | } 2026-03-03 00:49:36.973353 | orchestrator | 2026-03-03 00:49:36.973359 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-03 00:49:36.973365 | orchestrator | Tuesday 03 March 2026 00:49:34 +0000 (0:00:00.181) 0:00:11.930 ********* 2026-03-03 00:49:36.973371 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.973377 | orchestrator | 2026-03-03 00:49:36.973382 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-03 00:49:36.973389 | orchestrator | Tuesday 03 March 2026 00:49:34 +0000 (0:00:00.153) 0:00:12.084 ********* 2026-03-03 00:49:36.973395 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:49:36.973401 | orchestrator | 2026-03-03 00:49:36.973407 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-03 00:49:36.973413 | orchestrator | Tuesday 03 March 2026 00:49:34 +0000 (0:00:00.119) 0:00:12.204 ********* 2026-03-03 00:49:36.973419 | orchestrator | skipping: [testbed-node-3] 
2026-03-03 00:49:36.973425 | orchestrator | 2026-03-03 00:49:36.973430 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-03 00:49:36.973436 | orchestrator | Tuesday 03 March 2026 00:49:34 +0000 (0:00:00.117) 0:00:12.321 ********* 2026-03-03 00:49:36.973442 | orchestrator | changed: [testbed-node-3] => { 2026-03-03 00:49:36.973449 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-03 00:49:36.973455 | orchestrator |  "ceph_osd_devices": { 2026-03-03 00:49:36.973462 | orchestrator |  "sdb": { 2026-03-03 00:49:36.973469 | orchestrator |  "osd_lvm_uuid": "896495c2-660d-5a75-b418-75215a0ec973" 2026-03-03 00:49:36.973475 | orchestrator |  }, 2026-03-03 00:49:36.973481 | orchestrator |  "sdc": { 2026-03-03 00:49:36.973488 | orchestrator |  "osd_lvm_uuid": "d486d743-7c4f-58d7-8950-e96875d5f319" 2026-03-03 00:49:36.973494 | orchestrator |  } 2026-03-03 00:49:36.973499 | orchestrator |  }, 2026-03-03 00:49:36.973505 | orchestrator |  "lvm_volumes": [ 2026-03-03 00:49:36.973512 | orchestrator |  { 2026-03-03 00:49:36.973518 | orchestrator |  "data": "osd-block-896495c2-660d-5a75-b418-75215a0ec973", 2026-03-03 00:49:36.973525 | orchestrator |  "data_vg": "ceph-896495c2-660d-5a75-b418-75215a0ec973" 2026-03-03 00:49:36.973541 | orchestrator |  }, 2026-03-03 00:49:36.973548 | orchestrator |  { 2026-03-03 00:49:36.973555 | orchestrator |  "data": "osd-block-d486d743-7c4f-58d7-8950-e96875d5f319", 2026-03-03 00:49:36.973561 | orchestrator |  "data_vg": "ceph-d486d743-7c4f-58d7-8950-e96875d5f319" 2026-03-03 00:49:36.973567 | orchestrator |  } 2026-03-03 00:49:36.973573 | orchestrator |  ] 2026-03-03 00:49:36.973579 | orchestrator |  } 2026-03-03 00:49:36.973585 | orchestrator | } 2026-03-03 00:49:36.973591 | orchestrator | 2026-03-03 00:49:36.973597 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-03 00:49:36.973603 | orchestrator | Tuesday 03 March 2026 
00:49:35 +0000 (0:00:00.355) 0:00:12.677 ********* 2026-03-03 00:49:36.973610 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-03 00:49:36.973614 | orchestrator | 2026-03-03 00:49:36.973618 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-03 00:49:36.973622 | orchestrator | 2026-03-03 00:49:36.973626 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-03 00:49:36.973630 | orchestrator | Tuesday 03 March 2026 00:49:36 +0000 (0:00:01.513) 0:00:14.191 ********* 2026-03-03 00:49:36.973634 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-03 00:49:36.973637 | orchestrator | 2026-03-03 00:49:36.973647 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-03 00:49:36.973651 | orchestrator | Tuesday 03 March 2026 00:49:36 +0000 (0:00:00.231) 0:00:14.422 ********* 2026-03-03 00:49:36.973655 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:49:36.973659 | orchestrator | 2026-03-03 00:49:36.973670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.815788 | orchestrator | Tuesday 03 March 2026 00:49:36 +0000 (0:00:00.169) 0:00:14.591 ********* 2026-03-03 00:49:43.815838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-03 00:49:43.815843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-03 00:49:43.815847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-03 00:49:43.815851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-03 00:49:43.815855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-03 
00:49:43.815859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-03 00:49:43.815863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-03 00:49:43.815869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-03 00:49:43.815873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-03 00:49:43.815877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-03 00:49:43.815881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-03 00:49:43.815887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-03 00:49:43.815893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-03 00:49:43.815899 | orchestrator | 2026-03-03 00:49:43.815909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.815929 | orchestrator | Tuesday 03 March 2026 00:49:37 +0000 (0:00:00.315) 0:00:14.906 ********* 2026-03-03 00:49:43.815936 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.815942 | orchestrator | 2026-03-03 00:49:43.815947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.815953 | orchestrator | Tuesday 03 March 2026 00:49:37 +0000 (0:00:00.177) 0:00:15.084 ********* 2026-03-03 00:49:43.815973 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.815980 | orchestrator | 2026-03-03 00:49:43.815985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.815991 | orchestrator | Tuesday 03 March 2026 00:49:37 +0000 (0:00:00.157) 0:00:15.241 ********* 2026-03-03 
00:49:43.815997 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816002 | orchestrator | 2026-03-03 00:49:43.816008 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.816014 | orchestrator | Tuesday 03 March 2026 00:49:37 +0000 (0:00:00.156) 0:00:15.397 ********* 2026-03-03 00:49:43.816020 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816025 | orchestrator | 2026-03-03 00:49:43.816031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.816036 | orchestrator | Tuesday 03 March 2026 00:49:37 +0000 (0:00:00.171) 0:00:15.569 ********* 2026-03-03 00:49:43.816041 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816047 | orchestrator | 2026-03-03 00:49:43.816053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.816058 | orchestrator | Tuesday 03 March 2026 00:49:38 +0000 (0:00:00.450) 0:00:16.020 ********* 2026-03-03 00:49:43.816064 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816070 | orchestrator | 2026-03-03 00:49:43.816076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.816082 | orchestrator | Tuesday 03 March 2026 00:49:38 +0000 (0:00:00.218) 0:00:16.239 ********* 2026-03-03 00:49:43.816088 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816094 | orchestrator | 2026-03-03 00:49:43.816100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.816106 | orchestrator | Tuesday 03 March 2026 00:49:38 +0000 (0:00:00.151) 0:00:16.391 ********* 2026-03-03 00:49:43.816112 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816119 | orchestrator | 2026-03-03 00:49:43.816125 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-03 00:49:43.816138 | orchestrator | Tuesday 03 March 2026 00:49:38 +0000 (0:00:00.160) 0:00:16.551 ********* 2026-03-03 00:49:43.816149 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78) 2026-03-03 00:49:43.816156 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78) 2026-03-03 00:49:43.816162 | orchestrator | 2026-03-03 00:49:43.816177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.816187 | orchestrator | Tuesday 03 March 2026 00:49:39 +0000 (0:00:00.395) 0:00:16.947 ********* 2026-03-03 00:49:43.816193 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc) 2026-03-03 00:49:43.816199 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc) 2026-03-03 00:49:43.816205 | orchestrator | 2026-03-03 00:49:43.816242 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.816247 | orchestrator | Tuesday 03 March 2026 00:49:39 +0000 (0:00:00.392) 0:00:17.339 ********* 2026-03-03 00:49:43.816251 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d) 2026-03-03 00:49:43.816255 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d) 2026-03-03 00:49:43.816258 | orchestrator | 2026-03-03 00:49:43.816262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.816276 | orchestrator | Tuesday 03 March 2026 00:49:40 +0000 (0:00:00.334) 0:00:17.674 ********* 2026-03-03 00:49:43.816280 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4) 2026-03-03 00:49:43.816284 | orchestrator | 
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4) 2026-03-03 00:49:43.816288 | orchestrator | 2026-03-03 00:49:43.816298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:49:43.816305 | orchestrator | Tuesday 03 March 2026 00:49:40 +0000 (0:00:00.377) 0:00:18.052 ********* 2026-03-03 00:49:43.816311 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-03 00:49:43.816317 | orchestrator | 2026-03-03 00:49:43.816322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816329 | orchestrator | Tuesday 03 March 2026 00:49:40 +0000 (0:00:00.297) 0:00:18.349 ********* 2026-03-03 00:49:43.816335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-03 00:49:43.816342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-03 00:49:43.816348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-03 00:49:43.816354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-03 00:49:43.816360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-03 00:49:43.816367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-03 00:49:43.816371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-03 00:49:43.816375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-03 00:49:43.816379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-03 00:49:43.816382 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-03 00:49:43.816386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-03 00:49:43.816390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-03 00:49:43.816393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-03 00:49:43.816397 | orchestrator | 2026-03-03 00:49:43.816401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816404 | orchestrator | Tuesday 03 March 2026 00:49:41 +0000 (0:00:00.343) 0:00:18.693 ********* 2026-03-03 00:49:43.816408 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816412 | orchestrator | 2026-03-03 00:49:43.816416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816419 | orchestrator | Tuesday 03 March 2026 00:49:41 +0000 (0:00:00.474) 0:00:19.167 ********* 2026-03-03 00:49:43.816423 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816427 | orchestrator | 2026-03-03 00:49:43.816431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816434 | orchestrator | Tuesday 03 March 2026 00:49:41 +0000 (0:00:00.184) 0:00:19.351 ********* 2026-03-03 00:49:43.816438 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816442 | orchestrator | 2026-03-03 00:49:43.816446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816449 | orchestrator | Tuesday 03 March 2026 00:49:41 +0000 (0:00:00.190) 0:00:19.542 ********* 2026-03-03 00:49:43.816453 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816457 | orchestrator | 2026-03-03 00:49:43.816461 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816464 | orchestrator | Tuesday 03 March 2026 00:49:42 +0000 (0:00:00.172) 0:00:19.714 ********* 2026-03-03 00:49:43.816468 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816472 | orchestrator | 2026-03-03 00:49:43.816475 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816479 | orchestrator | Tuesday 03 March 2026 00:49:42 +0000 (0:00:00.195) 0:00:19.909 ********* 2026-03-03 00:49:43.816483 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816490 | orchestrator | 2026-03-03 00:49:43.816497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816501 | orchestrator | Tuesday 03 March 2026 00:49:42 +0000 (0:00:00.187) 0:00:20.097 ********* 2026-03-03 00:49:43.816505 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816509 | orchestrator | 2026-03-03 00:49:43.816512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816516 | orchestrator | Tuesday 03 March 2026 00:49:42 +0000 (0:00:00.204) 0:00:20.301 ********* 2026-03-03 00:49:43.816520 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:43.816524 | orchestrator | 2026-03-03 00:49:43.816527 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816531 | orchestrator | Tuesday 03 March 2026 00:49:42 +0000 (0:00:00.199) 0:00:20.501 ********* 2026-03-03 00:49:43.816535 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-03 00:49:43.816539 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-03 00:49:43.816543 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-03 00:49:43.816547 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-03 00:49:43.816551 | orchestrator | 2026-03-03 
00:49:43.816555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:43.816558 | orchestrator | Tuesday 03 March 2026 00:49:43 +0000 (0:00:00.840) 0:00:21.341 ********* 2026-03-03 00:49:43.816562 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.573831 | orchestrator | 2026-03-03 00:49:50.573896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:50.573909 | orchestrator | Tuesday 03 March 2026 00:49:43 +0000 (0:00:00.150) 0:00:21.492 ********* 2026-03-03 00:49:50.573918 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.573926 | orchestrator | 2026-03-03 00:49:50.573933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:50.573941 | orchestrator | Tuesday 03 March 2026 00:49:44 +0000 (0:00:00.189) 0:00:21.682 ********* 2026-03-03 00:49:50.573950 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.573957 | orchestrator | 2026-03-03 00:49:50.573964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-03 00:49:50.573972 | orchestrator | Tuesday 03 March 2026 00:49:44 +0000 (0:00:00.185) 0:00:21.867 ********* 2026-03-03 00:49:50.573980 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.573988 | orchestrator | 2026-03-03 00:49:50.573995 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-03 00:49:50.574003 | orchestrator | Tuesday 03 March 2026 00:49:44 +0000 (0:00:00.569) 0:00:22.437 ********* 2026-03-03 00:49:50.574051 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-03 00:49:50.574062 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-03 00:49:50.574070 | orchestrator | 2026-03-03 00:49:50.574089 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-03 00:49:50.574099 | orchestrator | Tuesday 03 March 2026 00:49:45 +0000 (0:00:00.188) 0:00:22.625 ********* 2026-03-03 00:49:50.574109 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.574120 | orchestrator | 2026-03-03 00:49:50.574131 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-03 00:49:50.574141 | orchestrator | Tuesday 03 March 2026 00:49:45 +0000 (0:00:00.174) 0:00:22.800 ********* 2026-03-03 00:49:50.574151 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.574161 | orchestrator | 2026-03-03 00:49:50.574171 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-03 00:49:50.574181 | orchestrator | Tuesday 03 March 2026 00:49:45 +0000 (0:00:00.136) 0:00:22.936 ********* 2026-03-03 00:49:50.574188 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.574246 | orchestrator | 2026-03-03 00:49:50.574283 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-03 00:49:50.574290 | orchestrator | Tuesday 03 March 2026 00:49:45 +0000 (0:00:00.160) 0:00:23.097 ********* 2026-03-03 00:49:50.574315 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:49:50.574323 | orchestrator | 2026-03-03 00:49:50.574330 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-03 00:49:50.574337 | orchestrator | Tuesday 03 March 2026 00:49:45 +0000 (0:00:00.149) 0:00:23.247 ********* 2026-03-03 00:49:50.574346 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'}}) 2026-03-03 00:49:50.574353 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '60a17889-adeb-5df5-a11b-dee290996ccf'}}) 2026-03-03 00:49:50.574361 | orchestrator | 2026-03-03 00:49:50.574370 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-03 00:49:50.574377 | orchestrator | Tuesday 03 March 2026 00:49:45 +0000 (0:00:00.176) 0:00:23.423 ********* 2026-03-03 00:49:50.574386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'}})  2026-03-03 00:49:50.574396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '60a17889-adeb-5df5-a11b-dee290996ccf'}})  2026-03-03 00:49:50.574405 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.574412 | orchestrator | 2026-03-03 00:49:50.574419 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-03 00:49:50.574426 | orchestrator | Tuesday 03 March 2026 00:49:45 +0000 (0:00:00.145) 0:00:23.569 ********* 2026-03-03 00:49:50.574435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'}})  2026-03-03 00:49:50.574442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '60a17889-adeb-5df5-a11b-dee290996ccf'}})  2026-03-03 00:49:50.574451 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.574458 | orchestrator | 2026-03-03 00:49:50.574467 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-03 00:49:50.574476 | orchestrator | Tuesday 03 March 2026 00:49:46 +0000 (0:00:00.164) 0:00:23.733 ********* 2026-03-03 00:49:50.574485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'}})  2026-03-03 00:49:50.574493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '60a17889-adeb-5df5-a11b-dee290996ccf'}})  2026-03-03 00:49:50.574500 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:49:50.574508 | 
orchestrator |
2026-03-03 00:49:50.574529 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-03 00:49:50.574538 | orchestrator | Tuesday 03 March 2026 00:49:46 +0000 (0:00:00.168) 0:00:23.902 *********
2026-03-03 00:49:50.574544 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:49:50.574551 | orchestrator |
2026-03-03 00:49:50.574561 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-03 00:49:50.574569 | orchestrator | Tuesday 03 March 2026 00:49:46 +0000 (0:00:00.149) 0:00:24.051 *********
2026-03-03 00:49:50.574576 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:49:50.574584 | orchestrator |
2026-03-03 00:49:50.574591 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-03 00:49:50.574599 | orchestrator | Tuesday 03 March 2026 00:49:46 +0000 (0:00:00.132) 0:00:24.184 *********
2026-03-03 00:49:50.574626 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:49:50.574634 | orchestrator |
2026-03-03 00:49:50.574641 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-03 00:49:50.574649 | orchestrator | Tuesday 03 March 2026 00:49:46 +0000 (0:00:00.334) 0:00:24.518 *********
2026-03-03 00:49:50.574658 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:49:50.574666 | orchestrator |
2026-03-03 00:49:50.574673 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-03 00:49:50.574680 | orchestrator | Tuesday 03 March 2026 00:49:47 +0000 (0:00:00.134) 0:00:24.652 *********
2026-03-03 00:49:50.574689 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:49:50.574706 | orchestrator |
2026-03-03 00:49:50.574713 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-03 00:49:50.574721 | orchestrator | Tuesday 03 March 2026 00:49:47 +0000 (0:00:00.121) 0:00:24.774 *********
2026-03-03 00:49:50.574730 | orchestrator | ok: [testbed-node-4] => {
2026-03-03 00:49:50.574738 | orchestrator |     "ceph_osd_devices": {
2026-03-03 00:49:50.574746 | orchestrator |         "sdb": {
2026-03-03 00:49:50.574754 | orchestrator |             "osd_lvm_uuid": "a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd"
2026-03-03 00:49:50.574762 | orchestrator |         },
2026-03-03 00:49:50.574770 | orchestrator |         "sdc": {
2026-03-03 00:49:50.574779 | orchestrator |             "osd_lvm_uuid": "60a17889-adeb-5df5-a11b-dee290996ccf"
2026-03-03 00:49:50.574786 | orchestrator |         }
2026-03-03 00:49:50.574793 | orchestrator |     }
2026-03-03 00:49:50.574801 | orchestrator | }
2026-03-03 00:49:50.574810 | orchestrator |
2026-03-03 00:49:50.574818 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-03 00:49:50.574825 | orchestrator | Tuesday 03 March 2026 00:49:47 +0000 (0:00:00.132) 0:00:24.906 *********
2026-03-03 00:49:50.574833 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:49:50.574842 | orchestrator |
2026-03-03 00:49:50.574852 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-03 00:49:50.574860 | orchestrator | Tuesday 03 March 2026 00:49:47 +0000 (0:00:00.180) 0:00:25.086 *********
2026-03-03 00:49:50.574867 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:49:50.574873 | orchestrator |
2026-03-03 00:49:50.574879 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-03 00:49:50.574886 | orchestrator | Tuesday 03 March 2026 00:49:47 +0000 (0:00:00.154) 0:00:25.240 *********
2026-03-03 00:49:50.574892 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:49:50.574898 | orchestrator |
2026-03-03 00:49:50.574905 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-03 00:49:50.574912 | orchestrator | Tuesday 03 March 2026 00:49:47 +0000 (0:00:00.126) 0:00:25.366 *********
2026-03-03 00:49:50.574919 | orchestrator | changed: [testbed-node-4] => {
2026-03-03 00:49:50.574925 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-03 00:49:50.574932 | orchestrator |         "ceph_osd_devices": {
2026-03-03 00:49:50.574940 | orchestrator |             "sdb": {
2026-03-03 00:49:50.574949 | orchestrator |                 "osd_lvm_uuid": "a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd"
2026-03-03 00:49:50.574956 | orchestrator |             },
2026-03-03 00:49:50.574963 | orchestrator |             "sdc": {
2026-03-03 00:49:50.574971 | orchestrator |                 "osd_lvm_uuid": "60a17889-adeb-5df5-a11b-dee290996ccf"
2026-03-03 00:49:50.574980 | orchestrator |             }
2026-03-03 00:49:50.574987 | orchestrator |         },
2026-03-03 00:49:50.574994 | orchestrator |         "lvm_volumes": [
2026-03-03 00:49:50.575001 | orchestrator |             {
2026-03-03 00:49:50.575009 | orchestrator |                 "data": "osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd",
2026-03-03 00:49:50.575017 | orchestrator |                 "data_vg": "ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd"
2026-03-03 00:49:50.575025 | orchestrator |             },
2026-03-03 00:49:50.575034 | orchestrator |             {
2026-03-03 00:49:50.575041 | orchestrator |                 "data": "osd-block-60a17889-adeb-5df5-a11b-dee290996ccf",
2026-03-03 00:49:50.575048 | orchestrator |                 "data_vg": "ceph-60a17889-adeb-5df5-a11b-dee290996ccf"
2026-03-03 00:49:50.575056 | orchestrator |             }
2026-03-03 00:49:50.575065 | orchestrator |         ]
2026-03-03 00:49:50.575072 | orchestrator |     }
2026-03-03 00:49:50.575079 | orchestrator | }
2026-03-03 00:49:50.575086 | orchestrator |
2026-03-03 00:49:50.575094 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-03 00:49:50.575101 | orchestrator | Tuesday 03 March 2026 00:49:47 +0000 (0:00:00.206) 0:00:25.572 *********
2026-03-03 00:49:50.575109 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-03 00:49:50.575116 | orchestrator |
2026-03-03 00:49:50.575131 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-03 00:49:50.575137 | orchestrator |
2026-03-03 00:49:50.575144 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-03 00:49:50.575151 | orchestrator | Tuesday 03 March 2026 00:49:49 +0000 (0:00:01.284) 0:00:26.856 *********
2026-03-03 00:49:50.575157 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-03 00:49:50.575164 | orchestrator |
2026-03-03 00:49:50.575169 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-03 00:49:50.575175 | orchestrator | Tuesday 03 March 2026 00:49:49 +0000 (0:00:00.670) 0:00:27.527 *********
2026-03-03 00:49:50.575181 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:49:50.575187 | orchestrator |
2026-03-03 00:49:50.575193 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:50.575214 | orchestrator | Tuesday 03 March 2026 00:49:50 +0000 (0:00:00.295) 0:00:27.822 *********
2026-03-03 00:49:50.575221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-03 00:49:50.575228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-03 00:49:50.575234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-03 00:49:50.575240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-03 00:49:50.575245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-03 00:49:50.575258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-03 00:49:58.185335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-03 00:49:58.185423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-03 00:49:58.185432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-03 00:49:58.185439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-03 00:49:58.185462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-03 00:49:58.185468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-03 00:49:58.185475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-03 00:49:58.185481 | orchestrator |
2026-03-03 00:49:58.185488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185496 | orchestrator | Tuesday 03 March 2026 00:49:50 +0000 (0:00:00.443) 0:00:28.266 *********
2026-03-03 00:49:58.185502 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.185510 | orchestrator |
2026-03-03 00:49:58.185516 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185522 | orchestrator | Tuesday 03 March 2026 00:49:50 +0000 (0:00:00.190) 0:00:28.456 *********
2026-03-03 00:49:58.185528 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.185534 | orchestrator |
2026-03-03 00:49:58.185540 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185547 | orchestrator | Tuesday 03 March 2026 00:49:51 +0000 (0:00:00.215) 0:00:28.672 *********
2026-03-03 00:49:58.185552 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.185558 | orchestrator |
2026-03-03 00:49:58.185564 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185570 | orchestrator | Tuesday 03 March 2026 00:49:51 +0000 (0:00:00.226) 0:00:28.898 *********
2026-03-03 00:49:58.185580 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.185586 | orchestrator |
2026-03-03 00:49:58.185593 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185599 | orchestrator | Tuesday 03 March 2026 00:49:51 +0000 (0:00:00.211) 0:00:29.110 *********
2026-03-03 00:49:58.185660 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.185668 | orchestrator |
2026-03-03 00:49:58.185675 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185681 | orchestrator | Tuesday 03 March 2026 00:49:51 +0000 (0:00:00.199) 0:00:29.309 *********
2026-03-03 00:49:58.185687 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.185693 | orchestrator |
2026-03-03 00:49:58.185700 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185706 | orchestrator | Tuesday 03 March 2026 00:49:51 +0000 (0:00:00.188) 0:00:29.497 *********
2026-03-03 00:49:58.185712 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.185718 | orchestrator |
2026-03-03 00:49:58.185724 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185730 | orchestrator | Tuesday 03 March 2026 00:49:52 +0000 (0:00:00.186) 0:00:29.684 *********
2026-03-03 00:49:58.185737 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.185743 | orchestrator |
2026-03-03 00:49:58.185749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185756 | orchestrator | Tuesday 03 March 2026 00:49:52 +0000 (0:00:00.165) 0:00:29.849 *********
2026-03-03 00:49:58.185762 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8)
2026-03-03 00:49:58.185769 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8)
2026-03-03 00:49:58.185775 | orchestrator |
2026-03-03 00:49:58.185781 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185787 | orchestrator | Tuesday 03 March 2026 00:49:52 +0000 (0:00:00.650) 0:00:30.500 *********
2026-03-03 00:49:58.185794 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361)
2026-03-03 00:49:58.185800 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361)
2026-03-03 00:49:58.185805 | orchestrator |
2026-03-03 00:49:58.185811 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185817 | orchestrator | Tuesday 03 March 2026 00:49:53 +0000 (0:00:00.418) 0:00:30.918 *********
2026-03-03 00:49:58.185823 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301)
2026-03-03 00:49:58.185829 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301)
2026-03-03 00:49:58.185837 | orchestrator |
2026-03-03 00:49:58.185843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185849 | orchestrator | Tuesday 03 March 2026 00:49:53 +0000 (0:00:00.381) 0:00:31.300 *********
2026-03-03 00:49:58.185855 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53)
2026-03-03 00:49:58.185862 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53)
2026-03-03 00:49:58.185868 | orchestrator |
2026-03-03 00:49:58.185875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:49:58.185881 | orchestrator | Tuesday 03 March 2026 00:49:54 +0000 (0:00:00.398) 0:00:31.698 *********
2026-03-03 00:49:58.185888 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-03 00:49:58.185894 | orchestrator |
2026-03-03 00:49:58.185900 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.185923 | orchestrator | Tuesday 03 March 2026 00:49:54 +0000 (0:00:00.509) 0:00:32.207 *********
2026-03-03 00:49:58.185931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-03 00:49:58.185937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-03 00:49:58.185944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-03 00:49:58.185950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-03 00:49:58.185964 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-03 00:49:58.185970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-03 00:49:58.185977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-03 00:49:58.185984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-03 00:49:58.185990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-03 00:49:58.185996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-03 00:49:58.186002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-03 00:49:58.186008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-03 00:49:58.186072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-03 00:49:58.186080 | orchestrator |
2026-03-03 00:49:58.186087 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186095 | orchestrator | Tuesday 03 March 2026 00:49:54 +0000 (0:00:00.383) 0:00:32.591 *********
2026-03-03 00:49:58.186102 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186109 | orchestrator |
2026-03-03 00:49:58.186116 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186124 | orchestrator | Tuesday 03 March 2026 00:49:55 +0000 (0:00:00.259) 0:00:32.850 *********
2026-03-03 00:49:58.186131 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186138 | orchestrator |
2026-03-03 00:49:58.186145 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186152 | orchestrator | Tuesday 03 March 2026 00:49:55 +0000 (0:00:00.202) 0:00:33.053 *********
2026-03-03 00:49:58.186159 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186166 | orchestrator |
2026-03-03 00:49:58.186174 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186207 | orchestrator | Tuesday 03 March 2026 00:49:55 +0000 (0:00:00.182) 0:00:33.236 *********
2026-03-03 00:49:58.186214 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186220 | orchestrator |
2026-03-03 00:49:58.186226 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186232 | orchestrator | Tuesday 03 March 2026 00:49:55 +0000 (0:00:00.182) 0:00:33.418 *********
2026-03-03 00:49:58.186238 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186245 | orchestrator |
2026-03-03 00:49:58.186251 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186257 | orchestrator | Tuesday 03 March 2026 00:49:55 +0000 (0:00:00.190) 0:00:33.608 *********
2026-03-03 00:49:58.186263 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186269 | orchestrator |
2026-03-03 00:49:58.186275 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186281 | orchestrator | Tuesday 03 March 2026 00:49:56 +0000 (0:00:00.469) 0:00:34.078 *********
2026-03-03 00:49:58.186287 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186294 | orchestrator |
2026-03-03 00:49:58.186300 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186306 | orchestrator | Tuesday 03 March 2026 00:49:56 +0000 (0:00:00.181) 0:00:34.259 *********
2026-03-03 00:49:58.186312 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186318 | orchestrator |
2026-03-03 00:49:58.186324 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186330 | orchestrator | Tuesday 03 March 2026 00:49:56 +0000 (0:00:00.184) 0:00:34.444 *********
2026-03-03 00:49:58.186337 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-03 00:49:58.186351 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-03 00:49:58.186357 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-03 00:49:58.186363 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-03 00:49:58.186369 | orchestrator |
2026-03-03 00:49:58.186376 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186382 | orchestrator | Tuesday 03 March 2026 00:49:57 +0000 (0:00:00.566) 0:00:35.010 *********
2026-03-03 00:49:58.186388 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186394 | orchestrator |
2026-03-03 00:49:58.186400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186406 | orchestrator | Tuesday 03 March 2026 00:49:57 +0000 (0:00:00.182) 0:00:35.192 *********
2026-03-03 00:49:58.186412 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186418 | orchestrator |
2026-03-03 00:49:58.186424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186431 | orchestrator | Tuesday 03 March 2026 00:49:57 +0000 (0:00:00.189) 0:00:35.382 *********
2026-03-03 00:49:58.186437 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186443 | orchestrator |
2026-03-03 00:49:58.186449 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:49:58.186455 | orchestrator | Tuesday 03 March 2026 00:49:57 +0000 (0:00:00.203) 0:00:35.585 *********
2026-03-03 00:49:58.186462 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:49:58.186468 | orchestrator |
2026-03-03 00:49:58.186482 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-03 00:50:02.623820 | orchestrator | Tuesday 03 March 2026 00:49:58 +0000 (0:00:00.217) 0:00:35.803 *********
2026-03-03 00:50:02.623886 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-03 00:50:02.623895 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-03 00:50:02.623903 | orchestrator |
2026-03-03 00:50:02.623911 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-03 00:50:02.623919 | orchestrator | Tuesday 03 March 2026 00:49:58 +0000 (0:00:00.180) 0:00:35.983 *********
2026-03-03 00:50:02.623925 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.623932 | orchestrator |
2026-03-03 00:50:02.623939 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-03 00:50:02.623946 | orchestrator | Tuesday 03 March 2026 00:49:58 +0000 (0:00:00.140) 0:00:36.124 *********
2026-03-03 00:50:02.623952 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.623959 | orchestrator |
2026-03-03 00:50:02.623966 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-03 00:50:02.623973 | orchestrator | Tuesday 03 March 2026 00:49:58 +0000 (0:00:00.145) 0:00:36.269 *********
2026-03-03 00:50:02.623981 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.623988 | orchestrator |
2026-03-03 00:50:02.623995 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-03 00:50:02.624002 | orchestrator | Tuesday 03 March 2026 00:49:59 +0000 (0:00:00.386) 0:00:36.655 *********
2026-03-03 00:50:02.624010 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:50:02.624017 | orchestrator |
2026-03-03 00:50:02.624024 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-03 00:50:02.624031 | orchestrator | Tuesday 03 March 2026 00:49:59 +0000 (0:00:00.149) 0:00:36.805 *********
2026-03-03 00:50:02.624038 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7865f1e-8b85-57a7-a15d-91986b577cab'}})
2026-03-03 00:50:02.624046 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b901fd44-5489-5e25-a5fe-b820905f87a1'}})
2026-03-03 00:50:02.624052 | orchestrator |
2026-03-03 00:50:02.624059 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-03 00:50:02.624066 | orchestrator | Tuesday 03 March 2026 00:49:59 +0000 (0:00:00.151) 0:00:36.957 *********
2026-03-03 00:50:02.624074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7865f1e-8b85-57a7-a15d-91986b577cab'}})
2026-03-03 00:50:02.624102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b901fd44-5489-5e25-a5fe-b820905f87a1'}})
2026-03-03 00:50:02.624111 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.624117 | orchestrator |
2026-03-03 00:50:02.624124 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-03 00:50:02.624132 | orchestrator | Tuesday 03 March 2026 00:49:59 +0000 (0:00:00.158) 0:00:37.116 *********
2026-03-03 00:50:02.624140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7865f1e-8b85-57a7-a15d-91986b577cab'}})
2026-03-03 00:50:02.624148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b901fd44-5489-5e25-a5fe-b820905f87a1'}})
2026-03-03 00:50:02.624154 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.624161 | orchestrator |
2026-03-03 00:50:02.624168 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-03 00:50:02.624175 | orchestrator | Tuesday 03 March 2026 00:49:59 +0000 (0:00:00.150) 0:00:37.267 *********
2026-03-03 00:50:02.624197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7865f1e-8b85-57a7-a15d-91986b577cab'}})
2026-03-03 00:50:02.624204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b901fd44-5489-5e25-a5fe-b820905f87a1'}})
2026-03-03 00:50:02.624211 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.624217 | orchestrator |
2026-03-03 00:50:02.624224 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-03 00:50:02.624230 | orchestrator | Tuesday 03 March 2026 00:49:59 +0000 (0:00:00.155) 0:00:37.422 *********
2026-03-03 00:50:02.624237 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:50:02.624243 | orchestrator |
2026-03-03 00:50:02.624250 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-03 00:50:02.624256 | orchestrator | Tuesday 03 March 2026 00:49:59 +0000 (0:00:00.148) 0:00:37.571 *********
2026-03-03 00:50:02.624263 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:50:02.624269 | orchestrator |
2026-03-03 00:50:02.624276 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-03 00:50:02.624282 | orchestrator | Tuesday 03 March 2026 00:50:00 +0000 (0:00:00.143) 0:00:37.714 *********
2026-03-03 00:50:02.624289 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.624295 | orchestrator |
2026-03-03 00:50:02.624302 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-03 00:50:02.624309 | orchestrator | Tuesday 03 March 2026 00:50:00 +0000 (0:00:00.160) 0:00:37.875 *********
2026-03-03 00:50:02.624316 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.624323 | orchestrator |
2026-03-03 00:50:02.624330 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-03 00:50:02.624337 | orchestrator | Tuesday 03 March 2026 00:50:00 +0000 (0:00:00.150) 0:00:38.025 *********
2026-03-03 00:50:02.624345 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.624351 | orchestrator |
2026-03-03 00:50:02.624359 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-03 00:50:02.624366 | orchestrator | Tuesday 03 March 2026 00:50:00 +0000 (0:00:00.143) 0:00:38.168 *********
2026-03-03 00:50:02.624373 | orchestrator | ok: [testbed-node-5] => {
2026-03-03 00:50:02.624381 | orchestrator |     "ceph_osd_devices": {
2026-03-03 00:50:02.624388 | orchestrator |         "sdb": {
2026-03-03 00:50:02.624410 | orchestrator |             "osd_lvm_uuid": "f7865f1e-8b85-57a7-a15d-91986b577cab"
2026-03-03 00:50:02.624419 | orchestrator |         },
2026-03-03 00:50:02.624427 | orchestrator |         "sdc": {
2026-03-03 00:50:02.624447 | orchestrator |             "osd_lvm_uuid": "b901fd44-5489-5e25-a5fe-b820905f87a1"
2026-03-03 00:50:02.624455 | orchestrator |         }
2026-03-03 00:50:02.624463 | orchestrator |     }
2026-03-03 00:50:02.624471 | orchestrator | }
2026-03-03 00:50:02.624478 | orchestrator |
2026-03-03 00:50:02.624494 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-03 00:50:02.624503 | orchestrator | Tuesday 03 March 2026 00:50:00 +0000 (0:00:00.145) 0:00:38.314 *********
2026-03-03 00:50:02.624511 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.624519 | orchestrator |
2026-03-03 00:50:02.624526 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-03 00:50:02.624534 | orchestrator | Tuesday 03 March 2026 00:50:01 +0000 (0:00:00.381) 0:00:38.696 *********
2026-03-03 00:50:02.624542 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.624549 | orchestrator |
2026-03-03 00:50:02.624557 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-03 00:50:02.624565 | orchestrator | Tuesday 03 March 2026 00:50:01 +0000 (0:00:00.138) 0:00:38.834 *********
2026-03-03 00:50:02.624573 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:50:02.624581 | orchestrator |
2026-03-03 00:50:02.624589 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-03 00:50:02.624596 | orchestrator | Tuesday 03 March 2026 00:50:01 +0000 (0:00:00.147) 0:00:38.982 *********
2026-03-03 00:50:02.624604 | orchestrator | changed: [testbed-node-5] => {
2026-03-03 00:50:02.624612 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-03 00:50:02.624620 | orchestrator |         "ceph_osd_devices": {
2026-03-03 00:50:02.624628 | orchestrator |             "sdb": {
2026-03-03 00:50:02.624636 | orchestrator |                 "osd_lvm_uuid": "f7865f1e-8b85-57a7-a15d-91986b577cab"
2026-03-03 00:50:02.624644 | orchestrator |             },
2026-03-03 00:50:02.624651 | orchestrator |             "sdc": {
2026-03-03 00:50:02.624662 | orchestrator |                 "osd_lvm_uuid": "b901fd44-5489-5e25-a5fe-b820905f87a1"
2026-03-03 00:50:02.624669 | orchestrator |             }
2026-03-03 00:50:02.624677 | orchestrator |         },
2026-03-03 00:50:02.624684 | orchestrator |         "lvm_volumes": [
2026-03-03 00:50:02.624692 | orchestrator |             {
2026-03-03 00:50:02.624700 | orchestrator |                 "data": "osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab",
2026-03-03 00:50:02.624708 | orchestrator |                 "data_vg": "ceph-f7865f1e-8b85-57a7-a15d-91986b577cab"
2026-03-03 00:50:02.624715 | orchestrator |             },
2026-03-03 00:50:02.624726 | orchestrator |             {
2026-03-03 00:50:02.624734 | orchestrator |                 "data": "osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1",
2026-03-03 00:50:02.624741 | orchestrator |                 "data_vg": "ceph-b901fd44-5489-5e25-a5fe-b820905f87a1"
2026-03-03 00:50:02.624749 | orchestrator |             }
2026-03-03 00:50:02.624757 | orchestrator |         ]
2026-03-03 00:50:02.624765 | orchestrator |     }
2026-03-03 00:50:02.624773 | orchestrator | }
2026-03-03 00:50:02.624779 | orchestrator |
2026-03-03 00:50:02.624787 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-03 00:50:02.624794 | orchestrator | Tuesday 03 March 2026 00:50:01 +0000 (0:00:00.223) 0:00:39.205 *********
2026-03-03 00:50:02.624801 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-03 00:50:02.624809 | orchestrator |
2026-03-03 00:50:02.624817 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:50:02.624825 | orchestrator | testbed-node-3 : ok=42   changed=2   unreachable=0   failed=0   skipped=32   rescued=0   ignored=0
2026-03-03 00:50:02.624834 | orchestrator | testbed-node-4 : ok=42   changed=2   unreachable=0   failed=0   skipped=32   rescued=0   ignored=0
2026-03-03 00:50:02.624841 | orchestrator | testbed-node-5 : ok=42   changed=2   unreachable=0   failed=0   skipped=32   rescued=0   ignored=0
2026-03-03 00:50:02.624849 | orchestrator |
2026-03-03 00:50:02.624856 | orchestrator |
2026-03-03 00:50:02.624864 | orchestrator |
2026-03-03 00:50:02.624871 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:50:02.624879 | orchestrator | Tuesday 03 March 2026 00:50:02 +0000 (0:00:01.020) 0:00:40.226 *********
2026-03-03 00:50:02.624893 | orchestrator | ===============================================================================
2026-03-03 00:50:02.624900 | orchestrator | Write configuration file ------------------------------------------------ 3.82s
2026-03-03 00:50:02.624907 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s
2026-03-03 00:50:02.624915 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.15s
2026-03-03 00:50:02.624922 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s
2026-03-03 00:50:02.624930 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s
2026-03-03 00:50:02.624938 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2026-03-03 00:50:02.624946 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s
2026-03-03 00:50:02.624952 | orchestrator | Print configuration data ------------------------------------------------ 0.78s
2026-03-03 00:50:02.624960 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2026-03-03 00:50:02.624967 | orchestrator | Print WAL devices ------------------------------------------------------- 0.72s
2026-03-03 00:50:02.624975 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2026-03-03 00:50:02.624982 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-03-03 00:50:02.624990 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.67s
2026-03-03 00:50:02.625005 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-03-03 00:50:02.974861 | orchestrator | Set DB devices config data ---------------------------------------------- 0.63s
2026-03-03 00:50:02.974919 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.61s
2026-03-03 00:50:02.974926 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s
2026-03-03 00:50:02.974931 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s
2026-03-03 00:50:02.974937 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.52s
2026-03-03 00:50:02.974942 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-03-03 00:50:25.736470 | orchestrator | 2026-03-03 00:50:25 | INFO  | Task 93152042-6251-44ff-a9be-b1256c014c21 (sync inventory) is running in background. Output coming soon.
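The `_ceph_configure_lvm_config_data` blobs printed by the play above pair each OSD disk's `osd_lvm_uuid` with a logical volume named `osd-block-<uuid>` inside a volume group named `ceph-<uuid>`. A minimal sketch of that mapping (illustrative only; the function name is invented and this is not the playbook's actual Jinja logic):

```python
# Illustrative sketch, NOT the OSISM playbook's implementation: derive the
# lvm_volumes list from a ceph_osd_devices mapping, mirroring the naming
# scheme visible in the "Print configuration data" output above.
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        osd_uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{osd_uuid}",   # logical volume name
            "data_vg": f"ceph-{osd_uuid}",     # volume group name
        })
    return volumes

# Values taken from the testbed-node-4 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd"},
    "sdc": {"osd_lvm_uuid": "60a17889-adeb-5df5-a11b-dee290996ccf"},
}
print(build_lvm_volumes(devices)[0]["data_vg"])
# ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd
```

With this naming, the VG/LV pair for each disk can be recreated purely from the per-device UUID stored in the configuration file the handler writes.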
2026-03-03 00:50:27 | INFO  | Starting group_vars file reorganization
2026-03-03 00:50:27 | INFO  | Moved 0 file(s) to their respective directories
2026-03-03 00:50:27 | INFO  | Group_vars file reorganization completed
2026-03-03 00:50:29 | INFO  | Starting variable preparation from inventory
2026-03-03 00:50:31 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-03 00:50:31 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-03 00:50:31 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-03 00:50:31 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-03 00:50:31 | INFO  | Variable preparation completed
2026-03-03 00:50:32 | INFO  | Starting inventory overwrite handling
2026-03-03 00:50:32 | INFO  | Handling group overwrites in 99-overwrite
2026-03-03 00:50:32 | INFO  | Removing group frr:children from 60-generic
2026-03-03 00:50:32 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-03 00:50:32 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-03 00:50:32 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-03 00:50:32 | INFO  | Handling group overwrites in 20-roles
2026-03-03 00:50:32 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-03 00:50:32 | INFO  | Removed 5 group(s) in total
2026-03-03 00:50:32 | INFO  | Inventory overwrite handling completed
2026-03-03 00:50:33 | INFO  | Starting merge of inventory files
2026-03-03 00:50:33 | INFO  | Inventory files merged successfully
2026-03-03 00:50:37 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-03 00:50:48 | INFO  | Successfully wrote ClusterShell configuration
[master c4d7cc7] 2026-03-03-00-50
 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-03 00:50:51 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-03 00:50:51 | INFO  | Task 070366fe-d0fa-4f37-96f3-b6d2a4033002 (ceph-create-lvm-devices) was prepared for execution.
2026-03-03 00:50:51 | INFO  | It takes a moment until task 070366fe-d0fa-4f37-96f3-b6d2a4033002 (ceph-create-lvm-devices) has been started and output is visible here.
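The play output that follows creates one LVM volume group and logical volume per OSD device, with names derived from a stable per-OSD `osd_lvm_uuid` (`ceph-<uuid>` for the VG, `osd-block-<uuid>` for the LV). A minimal sketch of that naming scheme; the `ceph_lvm_names` helper and the node/device seed are illustrative assumptions, not OSISM's actual implementation, though the UUIDs in the log are indeed version-5 (name-based) UUIDs:

```python
import uuid

def ceph_lvm_names(osd_lvm_uuid: str) -> dict:
    """Map a per-OSD UUID to the VG/LV names seen in the play below."""
    return {
        "data": f"osd-block-{osd_lvm_uuid}",   # logical volume holding the OSD data
        "data_vg": f"ceph-{osd_lvm_uuid}",     # volume group created on the raw device
    }

# Version-5 UUIDs can be regenerated deterministically from a name; the
# node/device seed used here is a hypothetical example for illustration.
osd_lvm_uuid = str(uuid.uuid5(uuid.NAMESPACE_DNS, "testbed-node-3/sdb"))
print(ceph_lvm_names(osd_lvm_uuid))
```

Deterministic names make the play idempotent: re-running it finds the existing `ceph-<uuid>` VG instead of creating a duplicate.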
[WARNING]: Collection community.general does not support Ansible version 2.16.14

PLAY [Ceph create LVM devices] *************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Tuesday 03 March 2026 00:50:55 +0000 (0:00:00.288) 0:00:00.288 *********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Tuesday 03 March 2026 00:50:55 +0000 (0:00:00.223) 0:00:00.512 *********
ok: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:55 +0000 (0:00:00.198) 0:00:00.710 *********
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:56 +0000 (0:00:00.443) 0:00:01.153 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:56 +0000 (0:00:00.213) 0:00:01.367 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:56 +0000 (0:00:00.211) 0:00:01.579 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:56 +0000 (0:00:00.164) 0:00:01.743 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:57 +0000 (0:00:00.191) 0:00:01.935 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:57 +0000 (0:00:00.197) 0:00:02.133 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:57 +0000 (0:00:00.200) 0:00:02.333 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:57 +0000 (0:00:00.181) 0:00:02.515 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:57 +0000 (0:00:00.186) 0:00:02.702 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2)

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:58 +0000 (0:00:00.414) 0:00:03.116 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702)

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:58 +0000 (0:00:00.543) 0:00:03.660 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473)

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:50:59 +0000 (0:00:00.574) 0:00:04.234 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8)

TASK [Add known links to the list of available block devices] ******************
Tuesday 03 March 2026 00:51:00 +0000 (0:00:00.694) 0:00:04.929 *********
ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:00 +0000 (0:00:00.304) 0:00:05.233 *********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:00 +0000 (0:00:00.482) 0:00:05.716 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:01 +0000 (0:00:00.240) 0:00:05.956 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:01 +0000 (0:00:00.221) 0:00:06.177 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:01 +0000 (0:00:00.255) 0:00:06.433 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:01 +0000 (0:00:00.208) 0:00:06.641 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:01 +0000 (0:00:00.239) 0:00:06.881 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:02 +0000 (0:00:00.205) 0:00:07.087 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:02 +0000 (0:00:00.217) 0:00:07.305 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:02 +0000 (0:00:00.255) 0:00:07.560 *********
ok: [testbed-node-3] => (item=sda1)
ok: [testbed-node-3] => (item=sda14)
ok: [testbed-node-3] => (item=sda15)
ok: [testbed-node-3] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:03 +0000 (0:00:01.127) 0:00:08.687 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:03 +0000 (0:00:00.205) 0:00:08.893 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:04 +0000 (0:00:00.213) 0:00:09.106 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Tuesday 03 March 2026 00:51:04 +0000 (0:00:00.263) 0:00:09.370 *********
skipping: [testbed-node-3]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Tuesday 03 March 2026 00:51:04 +0000 (0:00:00.243) 0:00:09.613 *********
skipping: [testbed-node-3]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Tuesday 03 March 2026 00:51:04 +0000 (0:00:00.134) 0:00:09.748 *********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '896495c2-660d-5a75-b418-75215a0ec973'}})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd486d743-7c4f-58d7-8950-e96875d5f319'}})

TASK [Create block VGs] ********************************************************
Tuesday 03 March 2026 00:51:05 +0000 (0:00:00.222) 0:00:09.971 *********
changed: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
changed: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})

TASK [Print 'Create block VGs'] ************************************************
Tuesday 03 March 2026 00:51:07 +0000 (0:00:01.970) 0:00:11.941 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})
skipping: [testbed-node-3]

TASK [Create block LVs] ********************************************************
Tuesday 03 March 2026 00:51:07 +0000 (0:00:00.164) 0:00:12.105 *********
changed: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
changed: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})

TASK [Print 'Create block LVs'] ************************************************
Tuesday 03 March 2026 00:51:08 +0000 (0:00:01.352) 0:00:13.458 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})
skipping: [testbed-node-3]

TASK [Create DB VGs] ***********************************************************
Tuesday 03 March 2026 00:51:08 +0000 (0:00:00.145) 0:00:13.603 *********
skipping: [testbed-node-3]

TASK [Print 'Create DB VGs'] ***************************************************
Tuesday 03 March 2026 00:51:08 +0000 (0:00:00.131) 0:00:13.735 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})
skipping: [testbed-node-3]

TASK [Create WAL VGs] **********************************************************
Tuesday 03 March 2026 00:51:09 +0000 (0:00:00.283) 0:00:14.018 *********
skipping: [testbed-node-3]

TASK [Print 'Create WAL VGs'] **************************************************
Tuesday 03 March 2026 00:51:09 +0000 (0:00:00.121) 0:00:14.139 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})
skipping: [testbed-node-3]

TASK [Create DB+WAL VGs] *******************************************************
Tuesday 03 March 2026 00:51:09 +0000 (0:00:00.144) 0:00:14.284 *********
skipping: [testbed-node-3]

TASK [Print 'Create DB+WAL VGs'] ***********************************************
Tuesday 03 March 2026 00:51:09 +0000 (0:00:00.138) 0:00:14.422 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})
skipping: [testbed-node-3]

TASK [Prepare variables for OSD count check] ***********************************
Tuesday 03 March 2026 00:51:09 +0000 (0:00:00.150) 0:00:14.573 *********
ok: [testbed-node-3]

TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
Tuesday 03 March 2026 00:51:09 +0000 (0:00:00.126) 0:00:14.699 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})
skipping: [testbed-node-3]

TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
Tuesday 03 March 2026 00:51:09 +0000 (0:00:00.155) 0:00:14.855 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})
skipping: [testbed-node-3]

TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
Tuesday 03 March 2026 00:51:10 +0000 (0:00:00.139) 0:00:14.995 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
Tuesday 03 March 2026 00:51:10 +0000 (0:00:00.130) 0:00:15.125 *********
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
Tuesday 03 March 2026 00:51:10 +0000 (0:00:00.101) 0:00:15.227 *********
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
Tuesday 03 March 2026 00:51:10 +0000 (0:00:00.103) 0:00:15.331 *********
skipping: [testbed-node-3]

TASK [Print number of OSDs wanted per DB VG] ***********************************
Tuesday 03 March 2026 00:51:10 +0000 (0:00:00.134) 0:00:15.465 *********
ok: [testbed-node-3] => {
    "_num_osds_wanted_per_db_vg": {}
}

TASK [Print number of OSDs wanted per WAL VG] **********************************
Tuesday 03 March 2026 00:51:10 +0000 (0:00:00.306) 0:00:15.771 *********
ok: [testbed-node-3] => {
    "_num_osds_wanted_per_wal_vg": {}
}

TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
Tuesday 03 March 2026 00:51:10 +0000 (0:00:00.143) 0:00:15.915 *********
ok: [testbed-node-3] => {
    "_num_osds_wanted_per_db_wal_vg": {}
}

TASK [Gather DB VGs with total and available size in bytes] ********************
Tuesday 03 March 2026 00:51:11 +0000 (0:00:00.146) 0:00:16.062 *********
ok: [testbed-node-3]

TASK [Gather WAL VGs with total and available size in bytes] *******************
Tuesday 03 March 2026 00:51:11 +0000 (0:00:00.679) 0:00:16.741 *********
ok: [testbed-node-3]

TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
Tuesday 03 March 2026 00:51:12 +0000 (0:00:00.550) 0:00:17.291 *********
ok: [testbed-node-3]

TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
Tuesday 03 March 2026 00:51:12 +0000 (0:00:00.524) 0:00:17.816 *********
ok: [testbed-node-3]

TASK [Calculate VG sizes (without buffer)] *************************************
Tuesday 03 March 2026 00:51:13 +0000 (0:00:00.173) 0:00:17.990 *********
skipping: [testbed-node-3]

TASK [Calculate VG sizes (with buffer)] ****************************************
Tuesday 03 March 2026 00:51:13 +0000 (0:00:00.128) 0:00:18.118 *********
skipping: [testbed-node-3]

TASK [Print LVM VGs report data] ***********************************************
Tuesday 03 March 2026 00:51:13 +0000 (0:00:00.112) 0:00:18.231 *********
ok: [testbed-node-3] => {
    "vgs_report": {
        "vg": []
    }
}

TASK [Print LVM VG sizes] ******************************************************
Tuesday 03 March 2026 00:51:13 +0000 (0:00:00.181) 0:00:18.412 *********
skipping: [testbed-node-3]

TASK [Calculate size needed for LVs on ceph_db_devices] ************************
Tuesday 03 March 2026 00:51:13 +0000 (0:00:00.132) 0:00:18.545 *********
skipping: [testbed-node-3]

TASK [Print size needed for LVs on ceph_db_devices] ****************************
Tuesday 03 March 2026 00:51:13 +0000 (0:00:00.130) 0:00:18.676 *********
skipping: [testbed-node-3]

TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
Tuesday 03 March 2026 00:51:14 +0000 (0:00:00.409) 0:00:19.085 *********
skipping: [testbed-node-3]

TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
Tuesday 03 March 2026 00:51:14 +0000 (0:00:00.146) 0:00:19.232 *********
skipping: [testbed-node-3]

TASK [Print size needed for LVs on ceph_wal_devices] ***************************
Tuesday 03 March 2026 00:51:14 +0000 (0:00:00.144) 0:00:19.376 *********
skipping: [testbed-node-3]

TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
Tuesday 03 March 2026 00:51:14 +0000 (0:00:00.140) 0:00:19.517 *********
skipping: [testbed-node-3]

TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
Tuesday 03 March 2026 00:51:14 +0000 (0:00:00.142) 0:00:19.659 *********
skipping: [testbed-node-3]

TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
Tuesday 03 March 2026 00:51:14 +0000 (0:00:00.131) 0:00:19.790 *********
skipping: [testbed-node-3]

TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
Tuesday 03 March 2026 00:51:15 +0000 (0:00:00.130) 0:00:19.921 *********
skipping: [testbed-node-3]

TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
Tuesday 03 March 2026 00:51:15 +0000 (0:00:00.134) 0:00:20.056 *********
skipping: [testbed-node-3]

TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
Tuesday 03 March 2026 00:51:15 +0000 (0:00:00.143) 0:00:20.200 *********
skipping: [testbed-node-3]

TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
Tuesday 03 March 2026 00:51:15 +0000 (0:00:00.185) 0:00:20.385 *********
skipping: [testbed-node-3]

TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
Tuesday 03 March 2026 00:51:15 +0000 (0:00:00.143) 0:00:20.528 *********
skipping: [testbed-node-3]

TASK [Create DB LVs for ceph_db_devices] ***************************************
Tuesday 03 March 2026 00:51:15 +0000 (0:00:00.146) 0:00:20.675 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg':
'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:16.853150 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:16.853155 | orchestrator | 2026-03-03 00:51:16.853161 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-03 00:51:16.853172 | orchestrator | Tuesday 03 March 2026 00:51:16 +0000 (0:00:00.409) 0:00:21.084 ********* 2026-03-03 00:51:16.853178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:16.853183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:16.853187 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:16.853191 | orchestrator | 2026-03-03 00:51:16.853195 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-03 00:51:16.853198 | orchestrator | Tuesday 03 March 2026 00:51:16 +0000 (0:00:00.151) 0:00:21.235 ********* 2026-03-03 00:51:16.853202 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:16.853206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:16.853210 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:16.853213 | orchestrator | 2026-03-03 00:51:16.853217 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-03 00:51:16.853221 | orchestrator | Tuesday 03 March 2026 00:51:16 +0000 (0:00:00.170) 0:00:21.406 ********* 2026-03-03 00:51:16.853224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:16.853228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:16.853237 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:16.853240 | orchestrator | 2026-03-03 00:51:16.853244 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-03 00:51:16.853247 | orchestrator | Tuesday 03 March 2026 00:51:16 +0000 (0:00:00.142) 0:00:21.548 ********* 2026-03-03 00:51:16.853251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:16.853255 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:16.853258 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:16.853262 | orchestrator | 2026-03-03 00:51:16.853266 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-03 00:51:16.853269 | orchestrator | Tuesday 03 March 2026 00:51:16 +0000 (0:00:00.160) 0:00:21.709 ********* 2026-03-03 00:51:16.853280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:22.040738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:22.040795 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:22.040801 | orchestrator | 2026-03-03 00:51:22.040806 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-03 00:51:22.040811 | orchestrator | Tuesday 03 March 2026 00:51:16 +0000 (0:00:00.141) 0:00:21.851 ********* 2026-03-03 00:51:22.040815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:22.040820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:22.040824 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:22.040828 | orchestrator | 2026-03-03 00:51:22.040832 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-03 00:51:22.040836 | orchestrator | Tuesday 03 March 2026 00:51:17 +0000 (0:00:00.162) 0:00:22.014 ********* 2026-03-03 00:51:22.040840 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:22.040844 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:22.040848 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:22.040852 | orchestrator | 2026-03-03 00:51:22.040856 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-03 00:51:22.040860 | orchestrator | Tuesday 03 March 2026 00:51:17 +0000 (0:00:00.154) 0:00:22.169 ********* 2026-03-03 00:51:22.040864 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:51:22.040868 | orchestrator | 2026-03-03 00:51:22.040872 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-03 00:51:22.040876 | orchestrator | Tuesday 03 March 2026 00:51:17 +0000 
(0:00:00.419) 0:00:22.588 ********* 2026-03-03 00:51:22.040880 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:51:22.040884 | orchestrator | 2026-03-03 00:51:22.040888 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-03 00:51:22.040892 | orchestrator | Tuesday 03 March 2026 00:51:18 +0000 (0:00:00.499) 0:00:23.087 ********* 2026-03-03 00:51:22.040896 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:51:22.040899 | orchestrator | 2026-03-03 00:51:22.040903 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-03 00:51:22.040907 | orchestrator | Tuesday 03 March 2026 00:51:18 +0000 (0:00:00.171) 0:00:23.259 ********* 2026-03-03 00:51:22.040924 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'vg_name': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'}) 2026-03-03 00:51:22.040929 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'vg_name': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'}) 2026-03-03 00:51:22.040933 | orchestrator | 2026-03-03 00:51:22.040937 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-03 00:51:22.040941 | orchestrator | Tuesday 03 March 2026 00:51:18 +0000 (0:00:00.195) 0:00:23.455 ********* 2026-03-03 00:51:22.040953 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:22.040957 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:22.040961 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:22.040965 | orchestrator | 2026-03-03 00:51:22.040970 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-03 00:51:22.040974 | orchestrator | Tuesday 03 March 2026 00:51:18 +0000 (0:00:00.377) 0:00:23.832 ********* 2026-03-03 00:51:22.040978 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:22.040982 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:22.040986 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:22.040996 | orchestrator | 2026-03-03 00:51:22.041000 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-03 00:51:22.041004 | orchestrator | Tuesday 03 March 2026 00:51:19 +0000 (0:00:00.159) 0:00:23.992 ********* 2026-03-03 00:51:22.041008 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'})  2026-03-03 00:51:22.041012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'})  2026-03-03 00:51:22.041016 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:51:22.041020 | orchestrator | 2026-03-03 00:51:22.041024 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-03 00:51:22.041108 | orchestrator | Tuesday 03 March 2026 00:51:19 +0000 (0:00:00.172) 0:00:24.164 ********* 2026-03-03 00:51:22.041127 | orchestrator | ok: [testbed-node-3] => { 2026-03-03 00:51:22.041133 | orchestrator |  "lvm_report": { 2026-03-03 00:51:22.041138 | orchestrator |  "lv": [ 2026-03-03 00:51:22.041142 | orchestrator |  { 2026-03-03 00:51:22.041146 | orchestrator |  "lv_name": 
"osd-block-896495c2-660d-5a75-b418-75215a0ec973", 2026-03-03 00:51:22.041150 | orchestrator |  "vg_name": "ceph-896495c2-660d-5a75-b418-75215a0ec973" 2026-03-03 00:51:22.041155 | orchestrator |  }, 2026-03-03 00:51:22.041159 | orchestrator |  { 2026-03-03 00:51:22.041162 | orchestrator |  "lv_name": "osd-block-d486d743-7c4f-58d7-8950-e96875d5f319", 2026-03-03 00:51:22.041166 | orchestrator |  "vg_name": "ceph-d486d743-7c4f-58d7-8950-e96875d5f319" 2026-03-03 00:51:22.041170 | orchestrator |  } 2026-03-03 00:51:22.041174 | orchestrator |  ], 2026-03-03 00:51:22.041178 | orchestrator |  "pv": [ 2026-03-03 00:51:22.041182 | orchestrator |  { 2026-03-03 00:51:22.041186 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-03 00:51:22.041190 | orchestrator |  "vg_name": "ceph-896495c2-660d-5a75-b418-75215a0ec973" 2026-03-03 00:51:22.041200 | orchestrator |  }, 2026-03-03 00:51:22.041204 | orchestrator |  { 2026-03-03 00:51:22.041214 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-03 00:51:22.041218 | orchestrator |  "vg_name": "ceph-d486d743-7c4f-58d7-8950-e96875d5f319" 2026-03-03 00:51:22.041222 | orchestrator |  } 2026-03-03 00:51:22.041226 | orchestrator |  ] 2026-03-03 00:51:22.041230 | orchestrator |  } 2026-03-03 00:51:22.041234 | orchestrator | } 2026-03-03 00:51:22.041238 | orchestrator | 2026-03-03 00:51:22.041242 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-03 00:51:22.041246 | orchestrator | 2026-03-03 00:51:22.041250 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-03 00:51:22.041254 | orchestrator | Tuesday 03 March 2026 00:51:19 +0000 (0:00:00.315) 0:00:24.480 ********* 2026-03-03 00:51:22.041258 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-03 00:51:22.041262 | orchestrator | 2026-03-03 00:51:22.041266 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-03 
2026-03-03 00:51:22.041270 | orchestrator | Tuesday 03 March 2026 00:51:19 +0000 (0:00:00.238) 0:00:24.719 *********
2026-03-03 00:51:22.041274 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:51:22.041278 | orchestrator |
2026-03-03 00:51:22.041282 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:22.041286 | orchestrator | Tuesday 03 March 2026 00:51:20 +0000 (0:00:00.232) 0:00:24.951 *********
2026-03-03 00:51:22.041293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-03 00:51:22.041298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-03 00:51:22.041302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-03 00:51:22.041306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-03 00:51:22.041310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-03 00:51:22.041314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-03 00:51:22.041318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-03 00:51:22.041322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-03 00:51:22.041327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-03 00:51:22.041331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-03 00:51:22.041336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-03 00:51:22.041341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-03 00:51:22.041346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-03 00:51:22.041355 | orchestrator |
2026-03-03 00:51:22.041359 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:22.041364 | orchestrator | Tuesday 03 March 2026 00:51:20 +0000 (0:00:00.410) 0:00:25.362 *********
2026-03-03 00:51:22.041369 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:22.041374 | orchestrator |
2026-03-03 00:51:22.041378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:22.041383 | orchestrator | Tuesday 03 March 2026 00:51:20 +0000 (0:00:00.212) 0:00:25.574 *********
2026-03-03 00:51:22.041388 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:22.041392 | orchestrator |
2026-03-03 00:51:22.041397 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:22.041402 | orchestrator | Tuesday 03 March 2026 00:51:20 +0000 (0:00:00.191) 0:00:25.766 *********
2026-03-03 00:51:22.041407 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:22.041412 | orchestrator |
2026-03-03 00:51:22.041416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:22.041424 | orchestrator | Tuesday 03 March 2026 00:51:21 +0000 (0:00:00.615) 0:00:26.381 *********
2026-03-03 00:51:22.041429 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:22.041432 | orchestrator |
2026-03-03 00:51:22.041436 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:22.041440 | orchestrator | Tuesday 03 March 2026 00:51:21 +0000 (0:00:00.203) 0:00:26.585 *********
2026-03-03 00:51:22.041444 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:22.041448 | orchestrator |
2026-03-03 00:51:22.041452 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:22.041457 | orchestrator | Tuesday 03 March 2026 00:51:21 +0000 (0:00:00.178) 0:00:26.763 *********
2026-03-03 00:51:22.041461 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:22.041465 | orchestrator |
2026-03-03 00:51:22.041477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:33.196235 | orchestrator | Tuesday 03 March 2026 00:51:22 +0000 (0:00:00.192) 0:00:26.956 *********
2026-03-03 00:51:33.196338 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196350 | orchestrator |
2026-03-03 00:51:33.196358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:33.196365 | orchestrator | Tuesday 03 March 2026 00:51:22 +0000 (0:00:00.206) 0:00:27.163 *********
2026-03-03 00:51:33.196371 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196377 | orchestrator |
2026-03-03 00:51:33.196384 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:33.196391 | orchestrator | Tuesday 03 March 2026 00:51:22 +0000 (0:00:00.206) 0:00:27.370 *********
2026-03-03 00:51:33.196397 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78)
2026-03-03 00:51:33.196405 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78)
2026-03-03 00:51:33.196413 | orchestrator |
2026-03-03 00:51:33.196420 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:33.196427 | orchestrator | Tuesday 03 March 2026 00:51:22 +0000 (0:00:00.429) 0:00:27.799 *********
2026-03-03 00:51:33.196435 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc)
2026-03-03 00:51:33.196442 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc)
2026-03-03 00:51:33.196449 | orchestrator |
2026-03-03 00:51:33.196457 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:33.196464 | orchestrator | Tuesday 03 March 2026 00:51:23 +0000 (0:00:00.427) 0:00:28.227 *********
2026-03-03 00:51:33.196471 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d)
2026-03-03 00:51:33.196478 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d)
2026-03-03 00:51:33.196496 | orchestrator |
2026-03-03 00:51:33.196518 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:33.196525 | orchestrator | Tuesday 03 March 2026 00:51:23 +0000 (0:00:00.461) 0:00:28.688 *********
2026-03-03 00:51:33.196549 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4)
2026-03-03 00:51:33.196555 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4)
2026-03-03 00:51:33.196561 | orchestrator |
2026-03-03 00:51:33.196566 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:33.196572 | orchestrator | Tuesday 03 March 2026 00:51:24 +0000 (0:00:00.707) 0:00:29.395 *********
2026-03-03 00:51:33.196579 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-03 00:51:33.196586 | orchestrator |
2026-03-03 00:51:33.196593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196599 | orchestrator | Tuesday 03 March 2026 00:51:25 +0000 (0:00:00.590) 0:00:29.986 *********
2026-03-03 00:51:33.196626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-03 00:51:33.196635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-03 00:51:33.196641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-03 00:51:33.196646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-03 00:51:33.196655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-03 00:51:33.196663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-03 00:51:33.196669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-03 00:51:33.196675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-03 00:51:33.196680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-03 00:51:33.196686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-03 00:51:33.196692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-03 00:51:33.196697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-03 00:51:33.196702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-03 00:51:33.196708 | orchestrator |
2026-03-03 00:51:33.196713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196720 | orchestrator | Tuesday 03 March 2026 00:51:25 +0000 (0:00:00.879) 0:00:30.865 *********
2026-03-03 00:51:33.196725 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196731 | orchestrator |
2026-03-03 00:51:33.196737 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196742 | orchestrator | Tuesday 03 March 2026 00:51:26 +0000 (0:00:00.187) 0:00:31.053 *********
2026-03-03 00:51:33.196749 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196755 | orchestrator |
2026-03-03 00:51:33.196761 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196768 | orchestrator | Tuesday 03 March 2026 00:51:26 +0000 (0:00:00.220) 0:00:31.274 *********
2026-03-03 00:51:33.196775 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196782 | orchestrator |
2026-03-03 00:51:33.196806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196813 | orchestrator | Tuesday 03 March 2026 00:51:26 +0000 (0:00:00.201) 0:00:31.475 *********
2026-03-03 00:51:33.196818 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196824 | orchestrator |
2026-03-03 00:51:33.196831 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196840 | orchestrator | Tuesday 03 March 2026 00:51:26 +0000 (0:00:00.208) 0:00:31.683 *********
2026-03-03 00:51:33.196848 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196853 | orchestrator |
2026-03-03 00:51:33.196859 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196865 | orchestrator | Tuesday 03 March 2026 00:51:26 +0000 (0:00:00.208) 0:00:31.892 *********
2026-03-03 00:51:33.196871 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196877 | orchestrator |
2026-03-03 00:51:33.196883 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196891 | orchestrator | Tuesday 03 March 2026 00:51:27 +0000 (0:00:00.226) 0:00:32.118 *********
2026-03-03 00:51:33.196899 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196905 | orchestrator |
2026-03-03 00:51:33.196911 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196917 | orchestrator | Tuesday 03 March 2026 00:51:27 +0000 (0:00:00.201) 0:00:32.319 *********
2026-03-03 00:51:33.196931 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.196938 | orchestrator |
2026-03-03 00:51:33.196945 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.196952 | orchestrator | Tuesday 03 March 2026 00:51:27 +0000 (0:00:00.197) 0:00:32.516 *********
2026-03-03 00:51:33.196958 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-03 00:51:33.196965 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-03 00:51:33.196973 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-03 00:51:33.196978 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-03 00:51:33.196984 | orchestrator |
2026-03-03 00:51:33.196993 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.197001 | orchestrator | Tuesday 03 March 2026 00:51:28 +0000 (0:00:00.870) 0:00:33.387 *********
2026-03-03 00:51:33.197055 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.197062 | orchestrator |
2026-03-03 00:51:33.197068 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.197074 | orchestrator | Tuesday 03 March 2026 00:51:28 +0000 (0:00:00.186) 0:00:33.574 *********
2026-03-03 00:51:33.197081 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.197087 | orchestrator |
2026-03-03 00:51:33.197093 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.197110 | orchestrator | Tuesday 03 March 2026 00:51:29 +0000 (0:00:00.558) 0:00:34.132 *********
2026-03-03 00:51:33.197115 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.197119 | orchestrator |
2026-03-03 00:51:33.197122 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:33.197126 | orchestrator | Tuesday 03 March 2026 00:51:29 +0000 (0:00:00.193) 0:00:34.326 *********
2026-03-03 00:51:33.197130 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.197134 | orchestrator |
2026-03-03 00:51:33.197138 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-03 00:51:33.197142 | orchestrator | Tuesday 03 March 2026 00:51:29 +0000 (0:00:00.181) 0:00:34.508 *********
2026-03-03 00:51:33.197146 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.197150 | orchestrator |
2026-03-03 00:51:33.197154 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-03 00:51:33.197157 | orchestrator | Tuesday 03 March 2026 00:51:29 +0000 (0:00:00.124) 0:00:34.633 *********
2026-03-03 00:51:33.197161 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'}})
2026-03-03 00:51:33.197166 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '60a17889-adeb-5df5-a11b-dee290996ccf'}})
2026-03-03 00:51:33.197170 | orchestrator |
2026-03-03 00:51:33.197174 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-03 00:51:33.197177 | orchestrator | Tuesday 03 March 2026 00:51:29 +0000 (0:00:00.174) 0:00:34.807 *********
2026-03-03 00:51:33.197183 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})
2026-03-03 00:51:33.197188 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})
2026-03-03 00:51:33.197192 | orchestrator |
2026-03-03 00:51:33.197196 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-03 00:51:33.197200 | orchestrator | Tuesday 03 March 2026 00:51:31 +0000 (0:00:01.851) 0:00:36.659 *********
2026-03-03 00:51:33.197204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})
2026-03-03 00:51:33.197209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})
2026-03-03 00:51:33.197219 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:33.197223 | orchestrator |
2026-03-03 00:51:33.197226 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-03 00:51:33.197230 | orchestrator | Tuesday 03 March 2026 00:51:31 +0000 (0:00:00.139) 0:00:36.799 *********
2026-03-03 00:51:33.197234 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})
2026-03-03 00:51:33.197244 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})
2026-03-03 00:51:39.227317 | orchestrator |
2026-03-03 00:51:39.227373 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-03 00:51:39.227381 | orchestrator | Tuesday 03 March 2026 00:51:33 +0000 (0:00:01.394) 0:00:38.193 *********
2026-03-03 00:51:39.227387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})
2026-03-03 00:51:39.227396 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})
2026-03-03 00:51:39.227402 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:39.227411 | orchestrator |
2026-03-03 00:51:39.227420 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-03 00:51:39.227427 | orchestrator | Tuesday 03 March 2026 00:51:33 +0000 (0:00:00.182) 0:00:38.376 *********
2026-03-03 00:51:39.227434 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:39.227441 | orchestrator |
2026-03-03 00:51:39.227448 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-03 00:51:39.227455 | orchestrator | Tuesday 03 March 2026 00:51:33 +0000 (0:00:00.133) 0:00:38.509 *********
2026-03-03 00:51:39.227462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})
2026-03-03 00:51:39.227469 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})
2026-03-03 00:51:39.227476 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:39.227483 | orchestrator |
2026-03-03 00:51:39.227490 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-03 00:51:39.227497 | orchestrator | Tuesday 03 March 2026 00:51:33 +0000 (0:00:00.154) 0:00:38.664 *********
2026-03-03 00:51:39.227503 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:39.227510 | orchestrator |
2026-03-03 00:51:39.227516 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-03 00:51:39.227531 | orchestrator | Tuesday 03 March 2026 00:51:33 +0000 (0:00:00.123) 0:00:38.788 *********
2026-03-03 00:51:39.227538 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})
2026-03-03 00:51:39.227544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})
2026-03-03 00:51:39.227550 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:39.227557 | orchestrator |
2026-03-03 00:51:39.227563 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-03 00:51:39.227569 | orchestrator | Tuesday 03 March 2026 00:51:34 +0000 (0:00:00.306) 0:00:39.095 *********
2026-03-03 00:51:39.227575 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:39.227582 | orchestrator |
2026-03-03 00:51:39.227588 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-03 00:51:39.227594 | orchestrator | Tuesday 03 March 2026 00:51:34 +0000 (0:00:00.125) 0:00:39.220 *********
2026-03-03 00:51:39.227601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})
2026-03-03 00:51:39.227619 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})
2026-03-03 00:51:39.227628 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:51:39.227638 | orchestrator |
2026-03-03 00:51:39.227644 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-03 00:51:39.227651 | orchestrator | Tuesday 03 March 2026 00:51:34 +0000 (0:00:00.134) 0:00:39.355 *********
2026-03-03 00:51:39.227656 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:51:39.227664 | orchestrator | 2026-03-03 00:51:39.227670 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-03 00:51:39.227676 | orchestrator | Tuesday 03 March 2026 00:51:34 +0000 (0:00:00.143) 0:00:39.498 ********* 2026-03-03 00:51:39.227683 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:39.227690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:39.227696 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.227702 | orchestrator | 2026-03-03 00:51:39.227708 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-03 00:51:39.227713 | orchestrator | Tuesday 03 March 2026 00:51:34 +0000 (0:00:00.160) 0:00:39.659 ********* 2026-03-03 00:51:39.227720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:39.227727 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:39.227733 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.227739 | orchestrator | 2026-03-03 00:51:39.227746 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-03 00:51:39.227768 | orchestrator | Tuesday 03 March 2026 00:51:34 +0000 (0:00:00.173) 0:00:39.832 ********* 2026-03-03 00:51:39.227772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 
00:51:39.227776 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:39.227780 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.227783 | orchestrator | 2026-03-03 00:51:39.227787 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-03 00:51:39.227791 | orchestrator | Tuesday 03 March 2026 00:51:35 +0000 (0:00:00.139) 0:00:39.972 ********* 2026-03-03 00:51:39.227795 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.227798 | orchestrator | 2026-03-03 00:51:39.227802 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-03 00:51:39.227806 | orchestrator | Tuesday 03 March 2026 00:51:35 +0000 (0:00:00.133) 0:00:40.106 ********* 2026-03-03 00:51:39.227810 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.227814 | orchestrator | 2026-03-03 00:51:39.227817 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-03 00:51:39.227821 | orchestrator | Tuesday 03 March 2026 00:51:35 +0000 (0:00:00.145) 0:00:40.251 ********* 2026-03-03 00:51:39.227825 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.227829 | orchestrator | 2026-03-03 00:51:39.227832 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-03 00:51:39.227836 | orchestrator | Tuesday 03 March 2026 00:51:35 +0000 (0:00:00.142) 0:00:40.393 ********* 2026-03-03 00:51:39.227841 | orchestrator | ok: [testbed-node-4] => { 2026-03-03 00:51:39.227847 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-03 00:51:39.227861 | orchestrator | } 2026-03-03 00:51:39.227868 | orchestrator | 2026-03-03 00:51:39.227874 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-03 
00:51:39.227880 | orchestrator | Tuesday 03 March 2026 00:51:35 +0000 (0:00:00.154) 0:00:40.547 ********* 2026-03-03 00:51:39.227887 | orchestrator | ok: [testbed-node-4] => { 2026-03-03 00:51:39.227893 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-03 00:51:39.227900 | orchestrator | } 2026-03-03 00:51:39.227907 | orchestrator | 2026-03-03 00:51:39.227918 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-03 00:51:39.227923 | orchestrator | Tuesday 03 March 2026 00:51:35 +0000 (0:00:00.150) 0:00:40.697 ********* 2026-03-03 00:51:39.227927 | orchestrator | ok: [testbed-node-4] => { 2026-03-03 00:51:39.227932 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-03 00:51:39.227937 | orchestrator | } 2026-03-03 00:51:39.227941 | orchestrator | 2026-03-03 00:51:39.227946 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-03 00:51:39.227950 | orchestrator | Tuesday 03 March 2026 00:51:36 +0000 (0:00:00.366) 0:00:41.064 ********* 2026-03-03 00:51:39.227954 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:51:39.227959 | orchestrator | 2026-03-03 00:51:39.227963 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-03 00:51:39.227967 | orchestrator | Tuesday 03 March 2026 00:51:36 +0000 (0:00:00.657) 0:00:41.721 ********* 2026-03-03 00:51:39.227971 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:51:39.227976 | orchestrator | 2026-03-03 00:51:39.227980 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-03 00:51:39.227985 | orchestrator | Tuesday 03 March 2026 00:51:37 +0000 (0:00:00.600) 0:00:42.322 ********* 2026-03-03 00:51:39.227989 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:51:39.228021 | orchestrator | 2026-03-03 00:51:39.228025 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-03 00:51:39.228030 | orchestrator | Tuesday 03 March 2026 00:51:38 +0000 (0:00:00.631) 0:00:42.954 ********* 2026-03-03 00:51:39.228034 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:51:39.228039 | orchestrator | 2026-03-03 00:51:39.228043 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-03 00:51:39.228047 | orchestrator | Tuesday 03 March 2026 00:51:38 +0000 (0:00:00.175) 0:00:43.129 ********* 2026-03-03 00:51:39.228051 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.228056 | orchestrator | 2026-03-03 00:51:39.228060 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-03 00:51:39.228064 | orchestrator | Tuesday 03 March 2026 00:51:38 +0000 (0:00:00.117) 0:00:43.246 ********* 2026-03-03 00:51:39.228068 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.228073 | orchestrator | 2026-03-03 00:51:39.228077 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-03 00:51:39.228082 | orchestrator | Tuesday 03 March 2026 00:51:38 +0000 (0:00:00.120) 0:00:43.366 ********* 2026-03-03 00:51:39.228086 | orchestrator | ok: [testbed-node-4] => { 2026-03-03 00:51:39.228090 | orchestrator |  "vgs_report": { 2026-03-03 00:51:39.228095 | orchestrator |  "vg": [] 2026-03-03 00:51:39.228099 | orchestrator |  } 2026-03-03 00:51:39.228103 | orchestrator | } 2026-03-03 00:51:39.228108 | orchestrator | 2026-03-03 00:51:39.228112 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-03 00:51:39.228117 | orchestrator | Tuesday 03 March 2026 00:51:38 +0000 (0:00:00.171) 0:00:43.538 ********* 2026-03-03 00:51:39.228121 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.228126 | orchestrator | 2026-03-03 00:51:39.228130 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-03 00:51:39.228135 | orchestrator | Tuesday 03 March 2026 00:51:38 +0000 (0:00:00.152) 0:00:43.691 ********* 2026-03-03 00:51:39.228139 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.228143 | orchestrator | 2026-03-03 00:51:39.228148 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-03 00:51:39.228156 | orchestrator | Tuesday 03 March 2026 00:51:38 +0000 (0:00:00.152) 0:00:43.843 ********* 2026-03-03 00:51:39.228160 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.228164 | orchestrator | 2026-03-03 00:51:39.228169 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-03 00:51:39.228173 | orchestrator | Tuesday 03 March 2026 00:51:39 +0000 (0:00:00.147) 0:00:43.991 ********* 2026-03-03 00:51:39.228177 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:39.228182 | orchestrator | 2026-03-03 00:51:39.228190 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-03 00:51:44.127336 | orchestrator | Tuesday 03 March 2026 00:51:39 +0000 (0:00:00.150) 0:00:44.141 ********* 2026-03-03 00:51:44.128254 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128296 | orchestrator | 2026-03-03 00:51:44.128305 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-03 00:51:44.128312 | orchestrator | Tuesday 03 March 2026 00:51:39 +0000 (0:00:00.361) 0:00:44.503 ********* 2026-03-03 00:51:44.128320 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128324 | orchestrator | 2026-03-03 00:51:44.128328 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-03 00:51:44.128333 | orchestrator | Tuesday 03 March 2026 00:51:39 +0000 (0:00:00.145) 0:00:44.648 ********* 2026-03-03 00:51:44.128337 | orchestrator | skipping: [testbed-node-4] 
2026-03-03 00:51:44.128341 | orchestrator | 2026-03-03 00:51:44.128345 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-03 00:51:44.128349 | orchestrator | Tuesday 03 March 2026 00:51:39 +0000 (0:00:00.155) 0:00:44.803 ********* 2026-03-03 00:51:44.128353 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128356 | orchestrator | 2026-03-03 00:51:44.128361 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-03 00:51:44.128364 | orchestrator | Tuesday 03 March 2026 00:51:40 +0000 (0:00:00.149) 0:00:44.952 ********* 2026-03-03 00:51:44.128368 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128372 | orchestrator | 2026-03-03 00:51:44.128376 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-03 00:51:44.128380 | orchestrator | Tuesday 03 March 2026 00:51:40 +0000 (0:00:00.170) 0:00:45.123 ********* 2026-03-03 00:51:44.128384 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128388 | orchestrator | 2026-03-03 00:51:44.128392 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-03 00:51:44.128395 | orchestrator | Tuesday 03 March 2026 00:51:40 +0000 (0:00:00.157) 0:00:45.280 ********* 2026-03-03 00:51:44.128399 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128403 | orchestrator | 2026-03-03 00:51:44.128407 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-03 00:51:44.128411 | orchestrator | Tuesday 03 March 2026 00:51:40 +0000 (0:00:00.151) 0:00:45.432 ********* 2026-03-03 00:51:44.128415 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128419 | orchestrator | 2026-03-03 00:51:44.128423 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-03 00:51:44.128427 | orchestrator | 
Tuesday 03 March 2026 00:51:40 +0000 (0:00:00.154) 0:00:45.586 ********* 2026-03-03 00:51:44.128431 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128435 | orchestrator | 2026-03-03 00:51:44.128439 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-03 00:51:44.128443 | orchestrator | Tuesday 03 March 2026 00:51:40 +0000 (0:00:00.155) 0:00:45.742 ********* 2026-03-03 00:51:44.128447 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128451 | orchestrator | 2026-03-03 00:51:44.128455 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-03 00:51:44.128459 | orchestrator | Tuesday 03 March 2026 00:51:40 +0000 (0:00:00.143) 0:00:45.886 ********* 2026-03-03 00:51:44.128464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128497 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:44.128501 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128505 | orchestrator | 2026-03-03 00:51:44.128509 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-03 00:51:44.128513 | orchestrator | Tuesday 03 March 2026 00:51:41 +0000 (0:00:00.168) 0:00:46.054 ********* 2026-03-03 00:51:44.128517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128521 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:44.128525 | orchestrator | skipping: 
[testbed-node-4] 2026-03-03 00:51:44.128529 | orchestrator | 2026-03-03 00:51:44.128533 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-03 00:51:44.128536 | orchestrator | Tuesday 03 March 2026 00:51:41 +0000 (0:00:00.154) 0:00:46.208 ********* 2026-03-03 00:51:44.128540 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:44.128548 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128551 | orchestrator | 2026-03-03 00:51:44.128555 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-03 00:51:44.128559 | orchestrator | Tuesday 03 March 2026 00:51:41 +0000 (0:00:00.363) 0:00:46.572 ********* 2026-03-03 00:51:44.128563 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:44.128571 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128574 | orchestrator | 2026-03-03 00:51:44.128594 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-03 00:51:44.128598 | orchestrator | Tuesday 03 March 2026 00:51:41 +0000 (0:00:00.161) 0:00:46.733 ********* 2026-03-03 00:51:44.128602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 
'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128606 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:44.128610 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128614 | orchestrator | 2026-03-03 00:51:44.128618 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-03 00:51:44.128622 | orchestrator | Tuesday 03 March 2026 00:51:41 +0000 (0:00:00.176) 0:00:46.910 ********* 2026-03-03 00:51:44.128625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:44.128633 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128637 | orchestrator | 2026-03-03 00:51:44.128641 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-03 00:51:44.128644 | orchestrator | Tuesday 03 March 2026 00:51:42 +0000 (0:00:00.166) 0:00:47.076 ********* 2026-03-03 00:51:44.128648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:44.128665 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128668 | orchestrator | 2026-03-03 00:51:44.128672 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-03 
00:51:44.128676 | orchestrator | Tuesday 03 March 2026 00:51:42 +0000 (0:00:00.160) 0:00:47.237 ********* 2026-03-03 00:51:44.128680 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128684 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:44.128687 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128692 | orchestrator | 2026-03-03 00:51:44.128698 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-03 00:51:44.128704 | orchestrator | Tuesday 03 March 2026 00:51:42 +0000 (0:00:00.162) 0:00:47.399 ********* 2026-03-03 00:51:44.128709 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:51:44.128720 | orchestrator | 2026-03-03 00:51:44.128727 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-03 00:51:44.128732 | orchestrator | Tuesday 03 March 2026 00:51:43 +0000 (0:00:00.537) 0:00:47.936 ********* 2026-03-03 00:51:44.128738 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:51:44.128744 | orchestrator | 2026-03-03 00:51:44.128749 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-03 00:51:44.128755 | orchestrator | Tuesday 03 March 2026 00:51:43 +0000 (0:00:00.555) 0:00:48.492 ********* 2026-03-03 00:51:44.128760 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:51:44.128766 | orchestrator | 2026-03-03 00:51:44.128771 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-03 00:51:44.128777 | orchestrator | Tuesday 03 March 2026 00:51:43 +0000 (0:00:00.141) 0:00:48.634 ********* 2026-03-03 00:51:44.128784 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'vg_name': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'}) 2026-03-03 00:51:44.128791 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'vg_name': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'}) 2026-03-03 00:51:44.128797 | orchestrator | 2026-03-03 00:51:44.128803 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-03 00:51:44.128810 | orchestrator | Tuesday 03 March 2026 00:51:43 +0000 (0:00:00.164) 0:00:48.798 ********* 2026-03-03 00:51:44.128816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128822 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:44.128828 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:44.128844 | orchestrator | 2026-03-03 00:51:44.128851 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-03 00:51:44.128863 | orchestrator | Tuesday 03 March 2026 00:51:44 +0000 (0:00:00.159) 0:00:48.958 ********* 2026-03-03 00:51:44.128869 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:44.128880 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:50.069686 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:50.069768 | orchestrator | 2026-03-03 00:51:50.069796 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-03 00:51:50.069805 | 
orchestrator | Tuesday 03 March 2026 00:51:44 +0000 (0:00:00.169) 0:00:49.127 ********* 2026-03-03 00:51:50.069812 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'})  2026-03-03 00:51:50.069821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'})  2026-03-03 00:51:50.069827 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:51:50.069833 | orchestrator | 2026-03-03 00:51:50.069840 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-03 00:51:50.069846 | orchestrator | Tuesday 03 March 2026 00:51:44 +0000 (0:00:00.152) 0:00:49.279 ********* 2026-03-03 00:51:50.069851 | orchestrator | ok: [testbed-node-4] => { 2026-03-03 00:51:50.069857 | orchestrator |  "lvm_report": { 2026-03-03 00:51:50.069864 | orchestrator |  "lv": [ 2026-03-03 00:51:50.069871 | orchestrator |  { 2026-03-03 00:51:50.069877 | orchestrator |  "lv_name": "osd-block-60a17889-adeb-5df5-a11b-dee290996ccf", 2026-03-03 00:51:50.069885 | orchestrator |  "vg_name": "ceph-60a17889-adeb-5df5-a11b-dee290996ccf" 2026-03-03 00:51:50.069891 | orchestrator |  }, 2026-03-03 00:51:50.069897 | orchestrator |  { 2026-03-03 00:51:50.069904 | orchestrator |  "lv_name": "osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd", 2026-03-03 00:51:50.069910 | orchestrator |  "vg_name": "ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd" 2026-03-03 00:51:50.069916 | orchestrator |  } 2026-03-03 00:51:50.069922 | orchestrator |  ], 2026-03-03 00:51:50.069928 | orchestrator |  "pv": [ 2026-03-03 00:51:50.069934 | orchestrator |  { 2026-03-03 00:51:50.069940 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-03 00:51:50.069959 | orchestrator |  "vg_name": "ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd" 2026-03-03 00:51:50.069965 | orchestrator |  }, 2026-03-03 
00:51:50.070067 | orchestrator |  { 2026-03-03 00:51:50.070075 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-03 00:51:50.070082 | orchestrator |  "vg_name": "ceph-60a17889-adeb-5df5-a11b-dee290996ccf" 2026-03-03 00:51:50.070089 | orchestrator |  } 2026-03-03 00:51:50.070096 | orchestrator |  ] 2026-03-03 00:51:50.070102 | orchestrator |  } 2026-03-03 00:51:50.070109 | orchestrator | } 2026-03-03 00:51:50.070116 | orchestrator | 2026-03-03 00:51:50.070123 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-03 00:51:50.070130 | orchestrator | 2026-03-03 00:51:50.070136 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-03 00:51:50.070143 | orchestrator | Tuesday 03 March 2026 00:51:44 +0000 (0:00:00.514) 0:00:49.793 ********* 2026-03-03 00:51:50.070149 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-03 00:51:50.070155 | orchestrator | 2026-03-03 00:51:50.070162 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-03 00:51:50.070169 | orchestrator | Tuesday 03 March 2026 00:51:45 +0000 (0:00:00.258) 0:00:50.052 ********* 2026-03-03 00:51:50.070175 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:51:50.070182 | orchestrator | 2026-03-03 00:51:50.070188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-03 00:51:50.070195 | orchestrator | Tuesday 03 March 2026 00:51:45 +0000 (0:00:00.242) 0:00:50.295 ********* 2026-03-03 00:51:50.070202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-03 00:51:50.070208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-03 00:51:50.070215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-03 00:51:50.070222 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-03 00:51:50.070238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-03 00:51:50.070245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-03 00:51:50.070251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-03 00:51:50.070258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-03 00:51:50.070265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-03 00:51:50.070275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-03 00:51:50.070281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-03 00:51:50.070287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-03 00:51:50.070294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-03 00:51:50.070300 | orchestrator |
2026-03-03 00:51:50.070307 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070314 | orchestrator | Tuesday 03 March 2026  00:51:45 +0000 (0:00:00.399)       0:00:50.694 *********
2026-03-03 00:51:50.070320 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:50.070327 | orchestrator |
2026-03-03 00:51:50.070333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070339 | orchestrator | Tuesday 03 March 2026  00:51:45 +0000 (0:00:00.205)       0:00:50.900 *********
2026-03-03 00:51:50.070346 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:50.070353 | orchestrator |
2026-03-03 00:51:50.070359 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070382 | orchestrator | Tuesday 03 March 2026  00:51:46 +0000 (0:00:00.199)       0:00:51.099 *********
2026-03-03 00:51:50.070389 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:50.070395 | orchestrator |
2026-03-03 00:51:50.070401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070406 | orchestrator | Tuesday 03 March 2026  00:51:46 +0000 (0:00:00.211)       0:00:51.311 *********
2026-03-03 00:51:50.070412 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:50.070418 | orchestrator |
2026-03-03 00:51:50.070424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070429 | orchestrator | Tuesday 03 March 2026  00:51:46 +0000 (0:00:00.195)       0:00:51.507 *********
2026-03-03 00:51:50.070435 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:50.070441 | orchestrator |
2026-03-03 00:51:50.070447 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070452 | orchestrator | Tuesday 03 March 2026  00:51:47 +0000 (0:00:00.620)       0:00:52.127 *********
2026-03-03 00:51:50.070458 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:50.070463 | orchestrator |
2026-03-03 00:51:50.070469 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070475 | orchestrator | Tuesday 03 March 2026  00:51:47 +0000 (0:00:00.195)       0:00:52.323 *********
2026-03-03 00:51:50.070480 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:50.070486 | orchestrator |
2026-03-03 00:51:50.070492 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070498 | orchestrator | Tuesday 03 March 2026  00:51:47 +0000 (0:00:00.197)       0:00:52.521 *********
2026-03-03 00:51:50.070504 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:50.070510 | orchestrator |
2026-03-03 00:51:50.070516 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070522 | orchestrator | Tuesday 03 March 2026  00:51:47 +0000 (0:00:00.199)       0:00:52.720 *********
2026-03-03 00:51:50.070529 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8)
2026-03-03 00:51:50.070536 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8)
2026-03-03 00:51:50.070548 | orchestrator |
2026-03-03 00:51:50.070554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070560 | orchestrator | Tuesday 03 March 2026  00:51:48 +0000 (0:00:00.422)       0:00:53.142 *********
2026-03-03 00:51:50.070566 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361)
2026-03-03 00:51:50.070572 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361)
2026-03-03 00:51:50.070577 | orchestrator |
2026-03-03 00:51:50.070583 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070589 | orchestrator | Tuesday 03 March 2026  00:51:48 +0000 (0:00:00.425)       0:00:53.568 *********
2026-03-03 00:51:50.070594 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301)
2026-03-03 00:51:50.070600 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301)
2026-03-03 00:51:50.070606 | orchestrator |
2026-03-03 00:51:50.070612 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070618 | orchestrator | Tuesday 03 March 2026  00:51:49 +0000 (0:00:00.409)       0:00:53.978 *********
2026-03-03 00:51:50.070624 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53)
2026-03-03 00:51:50.070630 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53)
2026-03-03 00:51:50.070635 | orchestrator |
2026-03-03 00:51:50.070641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-03 00:51:50.070647 | orchestrator | Tuesday 03 March 2026  00:51:49 +0000 (0:00:00.414)       0:00:54.392 *********
2026-03-03 00:51:50.070653 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-03 00:51:50.070658 | orchestrator |
2026-03-03 00:51:50.070664 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:50.070670 | orchestrator | Tuesday 03 March 2026  00:51:49 +0000 (0:00:00.293)       0:00:54.685 *********
2026-03-03 00:51:50.070676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-03 00:51:50.070682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-03 00:51:50.070688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-03 00:51:50.070694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-03 00:51:50.070700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-03 00:51:50.070705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-03 00:51:50.070711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-03 00:51:50.070717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-03 00:51:50.070722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-03 00:51:50.070728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-03 00:51:50.070734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-03 00:51:50.070745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-03 00:51:58.402534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-03 00:51:58.402637 | orchestrator |
2026-03-03 00:51:58.402647 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402655 | orchestrator | Tuesday 03 March 2026  00:51:50 +0000 (0:00:00.373)       0:00:55.059 *********
2026-03-03 00:51:58.402676 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.402683 | orchestrator |
2026-03-03 00:51:58.402690 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402696 | orchestrator | Tuesday 03 March 2026  00:51:50 +0000 (0:00:00.180)       0:00:55.239 *********
2026-03-03 00:51:58.402702 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.402708 | orchestrator |
2026-03-03 00:51:58.402753 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402760 | orchestrator | Tuesday 03 March 2026  00:51:50 +0000 (0:00:00.493)       0:00:55.732 *********
2026-03-03 00:51:58.402766 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.402772 | orchestrator |
2026-03-03 00:51:58.402778 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402784 | orchestrator | Tuesday 03 March 2026  00:51:51 +0000 (0:00:00.187)       0:00:55.920 *********
2026-03-03 00:51:58.402790 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.402796 | orchestrator |
2026-03-03 00:51:58.402802 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402808 | orchestrator | Tuesday 03 March 2026  00:51:51 +0000 (0:00:00.181)       0:00:56.102 *********
2026-03-03 00:51:58.402814 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.402820 | orchestrator |
2026-03-03 00:51:58.402826 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402831 | orchestrator | Tuesday 03 March 2026  00:51:51 +0000 (0:00:00.185)       0:00:56.287 *********
2026-03-03 00:51:58.402837 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.402843 | orchestrator |
2026-03-03 00:51:58.402861 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402867 | orchestrator | Tuesday 03 March 2026  00:51:51 +0000 (0:00:00.179)       0:00:56.467 *********
2026-03-03 00:51:58.402873 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.402879 | orchestrator |
2026-03-03 00:51:58.402885 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402891 | orchestrator | Tuesday 03 March 2026  00:51:51 +0000 (0:00:00.183)       0:00:56.650 *********
2026-03-03 00:51:58.402897 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.402903 | orchestrator |
2026-03-03 00:51:58.402909 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402915 | orchestrator | Tuesday 03 March 2026  00:51:51 +0000 (0:00:00.190)       0:00:56.841 *********
2026-03-03 00:51:58.402921 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-03 00:51:58.402934 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-03 00:51:58.402941 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-03 00:51:58.402947 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-03 00:51:58.402972 | orchestrator |
2026-03-03 00:51:58.402978 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.402984 | orchestrator | Tuesday 03 March 2026  00:51:52 +0000 (0:00:00.596)       0:00:57.437 *********
2026-03-03 00:51:58.402990 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.402996 | orchestrator |
2026-03-03 00:51:58.403002 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.403008 | orchestrator | Tuesday 03 March 2026  00:51:52 +0000 (0:00:00.186)       0:00:57.624 *********
2026-03-03 00:51:58.403014 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403020 | orchestrator |
2026-03-03 00:51:58.403025 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.403031 | orchestrator | Tuesday 03 March 2026  00:51:52 +0000 (0:00:00.167)       0:00:57.792 *********
2026-03-03 00:51:58.403037 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403043 | orchestrator |
2026-03-03 00:51:58.403049 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-03 00:51:58.403055 | orchestrator | Tuesday 03 March 2026  00:51:53 +0000 (0:00:00.171)       0:00:57.963 *********
2026-03-03 00:51:58.403068 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403074 | orchestrator |
2026-03-03 00:51:58.403080 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-03 00:51:58.403086 | orchestrator | Tuesday 03 March 2026  00:51:53 +0000 (0:00:00.209)       0:00:58.172 *********
2026-03-03 00:51:58.403092 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403097 | orchestrator |
2026-03-03 00:51:58.403103 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-03 00:51:58.403109 | orchestrator | Tuesday 03 March 2026  00:51:53 +0000 (0:00:00.241)       0:00:58.414 *********
2026-03-03 00:51:58.403115 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f7865f1e-8b85-57a7-a15d-91986b577cab'}})
2026-03-03 00:51:58.403122 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b901fd44-5489-5e25-a5fe-b820905f87a1'}})
2026-03-03 00:51:58.403127 | orchestrator |
2026-03-03 00:51:58.403133 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-03 00:51:58.403139 | orchestrator | Tuesday 03 March 2026  00:51:53 +0000 (0:00:00.172)       0:00:58.587 *********
2026-03-03 00:51:58.403147 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:51:58.403154 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:51:58.403160 | orchestrator |
2026-03-03 00:51:58.403166 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-03 00:51:58.403184 | orchestrator | Tuesday 03 March 2026  00:51:55 +0000 (0:00:01.783)       0:01:00.370 *********
2026-03-03 00:51:58.403190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:51:58.403197 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:51:58.403203 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403209 | orchestrator |
2026-03-03 00:51:58.403215 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-03 00:51:58.403221 | orchestrator | Tuesday 03 March 2026  00:51:55 +0000 (0:00:00.137)       0:01:00.507 *********
2026-03-03 00:51:58.403227 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:51:58.403233 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:51:58.403239 | orchestrator |
2026-03-03 00:51:58.403245 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-03 00:51:58.403251 | orchestrator | Tuesday 03 March 2026  00:51:56 +0000 (0:00:01.405)       0:01:01.912 *********
2026-03-03 00:51:58.403257 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:51:58.403263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:51:58.403272 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403278 | orchestrator |
2026-03-03 00:51:58.403284 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-03 00:51:58.403290 | orchestrator | Tuesday 03 March 2026  00:51:57 +0000 (0:00:00.131)       0:01:02.044 *********
2026-03-03 00:51:58.403296 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403302 | orchestrator |
2026-03-03 00:51:58.403308 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-03 00:51:58.403314 | orchestrator | Tuesday 03 March 2026  00:51:57 +0000 (0:00:00.124)       0:01:02.169 *********
2026-03-03 00:51:58.403327 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:51:58.403333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:51:58.403339 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403345 | orchestrator |
2026-03-03 00:51:58.403351 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-03 00:51:58.403357 | orchestrator | Tuesday 03 March 2026  00:51:57 +0000 (0:00:00.138)       0:01:02.308 *********
2026-03-03 00:51:58.403363 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403369 | orchestrator |
2026-03-03 00:51:58.403375 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-03 00:51:58.403381 | orchestrator | Tuesday 03 March 2026  00:51:57 +0000 (0:00:00.127)       0:01:02.435 *********
2026-03-03 00:51:58.403387 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:51:58.403393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:51:58.403399 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403405 | orchestrator |
2026-03-03 00:51:58.403411 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-03 00:51:58.403417 | orchestrator | Tuesday 03 March 2026  00:51:57 +0000 (0:00:00.161)       0:01:02.596 *********
2026-03-03 00:51:58.403422 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403428 | orchestrator |
2026-03-03 00:51:58.403434 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-03 00:51:58.403440 | orchestrator | Tuesday 03 March 2026  00:51:57 +0000 (0:00:00.134)       0:01:02.731 *********
2026-03-03 00:51:58.403446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:51:58.403452 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:51:58.403458 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:51:58.403464 | orchestrator |
2026-03-03 00:51:58.403470 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-03 00:51:58.403476 | orchestrator | Tuesday 03 March 2026  00:51:57 +0000 (0:00:00.151)       0:01:02.882 *********
2026-03-03 00:51:58.403482 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:51:58.403488 | orchestrator |
2026-03-03 00:51:58.403494 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-03 00:51:58.403500 | orchestrator | Tuesday 03 March 2026  00:51:58 +0000 (0:00:00.363)       0:01:03.246 *********
2026-03-03 00:51:58.403510 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:04.631727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:04.631811 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.631821 | orchestrator |
2026-03-03 00:52:04.631828 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-03 00:52:04.631835 | orchestrator | Tuesday 03 March 2026  00:51:58 +0000 (0:00:00.159)       0:01:03.406 *********
2026-03-03 00:52:04.631841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:04.631847 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:04.631871 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.631879 | orchestrator |
2026-03-03 00:52:04.631888 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-03 00:52:04.631896 | orchestrator | Tuesday 03 March 2026  00:51:58 +0000 (0:00:00.164)       0:01:03.570 *********
2026-03-03 00:52:04.631904 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:04.631913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:04.631921 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.631929 | orchestrator |
2026-03-03 00:52:04.631978 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-03 00:52:04.631996 | orchestrator | Tuesday 03 March 2026  00:51:58 +0000 (0:00:00.156)       0:01:03.726 *********
2026-03-03 00:52:04.632002 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632007 | orchestrator |
2026-03-03 00:52:04.632012 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-03 00:52:04.632017 | orchestrator | Tuesday 03 March 2026  00:51:58 +0000 (0:00:00.133)       0:01:03.860 *********
2026-03-03 00:52:04.632022 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632028 | orchestrator |
2026-03-03 00:52:04.632033 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-03 00:52:04.632038 | orchestrator | Tuesday 03 March 2026  00:51:59 +0000 (0:00:00.137)       0:01:03.998 *********
2026-03-03 00:52:04.632043 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632048 | orchestrator |
2026-03-03 00:52:04.632053 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-03 00:52:04.632059 | orchestrator | Tuesday 03 March 2026  00:51:59 +0000 (0:00:00.142)       0:01:04.140 *********
2026-03-03 00:52:04.632064 | orchestrator | ok: [testbed-node-5] => {
2026-03-03 00:52:04.632070 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-03 00:52:04.632075 | orchestrator | }
2026-03-03 00:52:04.632081 | orchestrator |
2026-03-03 00:52:04.632086 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-03 00:52:04.632092 | orchestrator | Tuesday 03 March 2026  00:51:59 +0000 (0:00:00.134)       0:01:04.275 *********
2026-03-03 00:52:04.632097 | orchestrator | ok: [testbed-node-5] => {
2026-03-03 00:52:04.632102 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-03 00:52:04.632107 | orchestrator | }
2026-03-03 00:52:04.632112 | orchestrator |
2026-03-03 00:52:04.632118 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-03 00:52:04.632123 | orchestrator | Tuesday 03 March 2026  00:51:59 +0000 (0:00:00.132)       0:01:04.407 *********
2026-03-03 00:52:04.632128 | orchestrator | ok: [testbed-node-5] => {
2026-03-03 00:52:04.632133 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-03 00:52:04.632138 | orchestrator | }
2026-03-03 00:52:04.632144 | orchestrator |
2026-03-03 00:52:04.632149 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-03 00:52:04.632154 | orchestrator | Tuesday 03 March 2026  00:51:59 +0000 (0:00:00.153)       0:01:04.560 *********
2026-03-03 00:52:04.632159 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:52:04.632164 | orchestrator |
2026-03-03 00:52:04.632169 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-03 00:52:04.632175 | orchestrator | Tuesday 03 March 2026  00:52:00 +0000 (0:00:00.570)       0:01:05.131 *********
2026-03-03 00:52:04.632180 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:52:04.632185 | orchestrator |
2026-03-03 00:52:04.632191 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-03 00:52:04.632196 | orchestrator | Tuesday 03 March 2026  00:52:00 +0000 (0:00:00.568)       0:01:05.699 *********
2026-03-03 00:52:04.632201 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:52:04.632214 | orchestrator |
2026-03-03 00:52:04.632219 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-03 00:52:04.632224 | orchestrator | Tuesday 03 March 2026  00:52:01 +0000 (0:00:00.784)       0:01:06.484 *********
2026-03-03 00:52:04.632229 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:52:04.632234 | orchestrator |
2026-03-03 00:52:04.632239 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-03 00:52:04.632245 | orchestrator | Tuesday 03 March 2026  00:52:01 +0000 (0:00:00.144)       0:01:06.628 *********
2026-03-03 00:52:04.632250 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632255 | orchestrator |
2026-03-03 00:52:04.632260 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-03 00:52:04.632265 | orchestrator | Tuesday 03 March 2026  00:52:01 +0000 (0:00:00.104)       0:01:06.733 *********
2026-03-03 00:52:04.632270 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632277 | orchestrator |
2026-03-03 00:52:04.632283 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-03 00:52:04.632289 | orchestrator | Tuesday 03 March 2026  00:52:01 +0000 (0:00:00.102)       0:01:06.836 *********
2026-03-03 00:52:04.632295 | orchestrator | ok: [testbed-node-5] => {
2026-03-03 00:52:04.632308 | orchestrator |     "vgs_report": {
2026-03-03 00:52:04.632315 | orchestrator |         "vg": []
2026-03-03 00:52:04.632341 | orchestrator |     }
2026-03-03 00:52:04.632348 | orchestrator | }
2026-03-03 00:52:04.632354 | orchestrator |
2026-03-03 00:52:04.632361 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-03 00:52:04.632367 | orchestrator | Tuesday 03 March 2026  00:52:02 +0000 (0:00:00.145)       0:01:06.981 *********
2026-03-03 00:52:04.632373 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632379 | orchestrator |
2026-03-03 00:52:04.632385 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-03 00:52:04.632391 | orchestrator | Tuesday 03 March 2026  00:52:02 +0000 (0:00:00.153)       0:01:07.135 *********
2026-03-03 00:52:04.632397 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632404 | orchestrator |
2026-03-03 00:52:04.632409 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-03 00:52:04.632414 | orchestrator | Tuesday 03 March 2026  00:52:02 +0000 (0:00:00.126)       0:01:07.262 *********
2026-03-03 00:52:04.632420 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632425 | orchestrator |
2026-03-03 00:52:04.632430 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-03 00:52:04.632436 | orchestrator | Tuesday 03 March 2026  00:52:02 +0000 (0:00:00.136)       0:01:07.398 *********
2026-03-03 00:52:04.632441 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632446 | orchestrator |
2026-03-03 00:52:04.632451 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-03 00:52:04.632457 | orchestrator | Tuesday 03 March 2026  00:52:02 +0000 (0:00:00.150)       0:01:07.549 *********
2026-03-03 00:52:04.632462 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632467 | orchestrator |
2026-03-03 00:52:04.632472 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-03 00:52:04.632477 | orchestrator | Tuesday 03 March 2026  00:52:02 +0000 (0:00:00.134)       0:01:07.684 *********
2026-03-03 00:52:04.632482 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632488 | orchestrator |
2026-03-03 00:52:04.632493 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-03 00:52:04.632498 | orchestrator | Tuesday 03 March 2026  00:52:02 +0000 (0:00:00.138)       0:01:07.822 *********
2026-03-03 00:52:04.632503 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632509 | orchestrator |
2026-03-03 00:52:04.632514 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-03 00:52:04.632519 | orchestrator | Tuesday 03 March 2026  00:52:03 +0000 (0:00:00.139)       0:01:07.962 *********
2026-03-03 00:52:04.632525 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632530 | orchestrator |
2026-03-03 00:52:04.632535 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-03 00:52:04.632546 | orchestrator | Tuesday 03 March 2026  00:52:03 +0000 (0:00:00.348)       0:01:08.311 *********
2026-03-03 00:52:04.632552 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632561 | orchestrator |
2026-03-03 00:52:04.632569 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-03 00:52:04.632577 | orchestrator | Tuesday 03 March 2026  00:52:03 +0000 (0:00:00.133)       0:01:08.444 *********
2026-03-03 00:52:04.632585 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632593 | orchestrator |
2026-03-03 00:52:04.632601 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-03 00:52:04.632609 | orchestrator | Tuesday 03 March 2026  00:52:03 +0000 (0:00:00.129)       0:01:08.574 *********
2026-03-03 00:52:04.632617 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632625 | orchestrator |
2026-03-03 00:52:04.632633 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-03 00:52:04.632642 | orchestrator | Tuesday 03 March 2026  00:52:03 +0000 (0:00:00.143)       0:01:08.718 *********
2026-03-03 00:52:04.632650 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632658 | orchestrator |
2026-03-03 00:52:04.632667 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-03 00:52:04.632672 | orchestrator | Tuesday 03 March 2026  00:52:03 +0000 (0:00:00.147)       0:01:08.865 *********
2026-03-03 00:52:04.632678 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632683 | orchestrator |
2026-03-03 00:52:04.632688 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-03 00:52:04.632694 | orchestrator | Tuesday 03 March 2026  00:52:04 +0000 (0:00:00.140)       0:01:09.006 *********
2026-03-03 00:52:04.632699 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632704 | orchestrator |
2026-03-03 00:52:04.632710 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-03 00:52:04.632715 | orchestrator | Tuesday 03 March 2026  00:52:04 +0000 (0:00:00.143)       0:01:09.150 *********
2026-03-03 00:52:04.632720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:04.632726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:04.632732 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632737 | orchestrator |
2026-03-03 00:52:04.632742 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-03 00:52:04.632747 | orchestrator | Tuesday 03 March 2026  00:52:04 +0000 (0:00:00.147)       0:01:09.298 *********
2026-03-03 00:52:04.632752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:04.632758 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:04.632763 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:04.632768 | orchestrator |
2026-03-03 00:52:04.632773 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-03 00:52:04.632779 | orchestrator | Tuesday 03 March 2026  00:52:04 +0000 (0:00:00.172)       0:01:09.470 *********
2026-03-03 00:52:04.632788 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:07.651734 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:07.651837 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:07.651845 | orchestrator |
2026-03-03 00:52:07.651850 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-03 00:52:07.651857 | orchestrator | Tuesday 03 March 2026  00:52:04 +0000 (0:00:00.163)       0:01:09.634 *********
2026-03-03 00:52:07.651898 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:07.651903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:07.651907 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:07.651911 | orchestrator |
2026-03-03 00:52:07.651916 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-03 00:52:07.651920 | orchestrator | Tuesday 03 March 2026  00:52:04 +0000 (0:00:00.140)       0:01:09.774 *********
2026-03-03 00:52:07.651924 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:07.651987 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:07.651995 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:07.652003 | orchestrator |
2026-03-03 00:52:07.652013 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-03 00:52:07.652020 | orchestrator | Tuesday 03 March 2026  00:52:05 +0000 (0:00:00.162)       0:01:09.937 *********
2026-03-03 00:52:07.652026 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:07.652032 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:07.652039 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:07.652046 | orchestrator |
2026-03-03 00:52:07.652053 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-03 00:52:07.652061 | orchestrator | Tuesday 03 March 2026  00:52:05 +0000 (0:00:00.358)       0:01:10.296 *********
2026-03-03 00:52:07.652067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:07.652075 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:07.652080 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:07.652084 | orchestrator |
2026-03-03 00:52:07.652088 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-03 00:52:07.652092 | orchestrator | Tuesday 03 March 2026  00:52:05 +0000 (0:00:00.185)       0:01:10.481 *********
2026-03-03 00:52:07.652095 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:07.652099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:07.652103 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:52:07.652107 | orchestrator |
2026-03-03 00:52:07.652110 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-03 00:52:07.652114 | orchestrator | Tuesday 03 March 2026  00:52:05 +0000 (0:00:00.154)       0:01:10.635 *********
2026-03-03 00:52:07.652118 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:52:07.652123 | orchestrator |
2026-03-03 00:52:07.652127 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-03 00:52:07.652131 | orchestrator | Tuesday 03 March 2026  00:52:06 +0000 (0:00:00.558)       0:01:11.194 *********
2026-03-03 00:52:07.652135 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:52:07.652138 | orchestrator |
2026-03-03 00:52:07.652142 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-03 00:52:07.652152 | orchestrator | Tuesday 03 March 2026  00:52:06 +0000 (0:00:00.515)       0:01:11.710 *********
2026-03-03 00:52:07.652156 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:52:07.652160 | orchestrator |
2026-03-03 00:52:07.652163 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-03 00:52:07.652167 | orchestrator | Tuesday 03 March 2026  00:52:06 +0000 (0:00:00.133)       0:01:11.843 *********
2026-03-03 00:52:07.652171 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'vg_name': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})
2026-03-03 00:52:07.652176 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'vg_name': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:07.652180 | orchestrator |
2026-03-03 00:52:07.652184 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-03 00:52:07.652188 | orchestrator | Tuesday 03 March 2026  00:52:07 +0000 (0:00:00.144)       0:01:11.988 *********
2026-03-03 00:52:07.652206 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})
2026-03-03 00:52:07.652211 | orchestrator | skipping: [testbed-node-5] => (item={'data':
'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})  2026-03-03 00:52:07.652214 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:52:07.652218 | orchestrator | 2026-03-03 00:52:07.652222 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-03 00:52:07.652226 | orchestrator | Tuesday 03 March 2026 00:52:07 +0000 (0:00:00.165) 0:01:12.154 ********* 2026-03-03 00:52:07.652230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})  2026-03-03 00:52:07.652234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})  2026-03-03 00:52:07.652237 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:52:07.652241 | orchestrator | 2026-03-03 00:52:07.652245 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-03 00:52:07.652249 | orchestrator | Tuesday 03 March 2026 00:52:07 +0000 (0:00:00.141) 0:01:12.295 ********* 2026-03-03 00:52:07.652253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'})  2026-03-03 00:52:07.652260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'})  2026-03-03 00:52:07.652264 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:52:07.652268 | orchestrator | 2026-03-03 00:52:07.652271 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-03 00:52:07.652275 | orchestrator | Tuesday 03 March 2026 00:52:07 +0000 (0:00:00.139) 0:01:12.435 ********* 2026-03-03 00:52:07.652279 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-03 00:52:07.652283 | orchestrator |  "lvm_report": { 2026-03-03 00:52:07.652287 | orchestrator |  "lv": [ 2026-03-03 00:52:07.652291 | orchestrator |  { 2026-03-03 00:52:07.652295 | orchestrator |  "lv_name": "osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1", 2026-03-03 00:52:07.652302 | orchestrator |  "vg_name": "ceph-b901fd44-5489-5e25-a5fe-b820905f87a1" 2026-03-03 00:52:07.652308 | orchestrator |  }, 2026-03-03 00:52:07.652317 | orchestrator |  { 2026-03-03 00:52:07.652324 | orchestrator |  "lv_name": "osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab", 2026-03-03 00:52:07.652330 | orchestrator |  "vg_name": "ceph-f7865f1e-8b85-57a7-a15d-91986b577cab" 2026-03-03 00:52:07.652336 | orchestrator |  } 2026-03-03 00:52:07.652342 | orchestrator |  ], 2026-03-03 00:52:07.652348 | orchestrator |  "pv": [ 2026-03-03 00:52:07.652361 | orchestrator |  { 2026-03-03 00:52:07.652367 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-03 00:52:07.652374 | orchestrator |  "vg_name": "ceph-f7865f1e-8b85-57a7-a15d-91986b577cab" 2026-03-03 00:52:07.652380 | orchestrator |  }, 2026-03-03 00:52:07.652384 | orchestrator |  { 2026-03-03 00:52:07.652388 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-03 00:52:07.652392 | orchestrator |  "vg_name": "ceph-b901fd44-5489-5e25-a5fe-b820905f87a1" 2026-03-03 00:52:07.652395 | orchestrator |  } 2026-03-03 00:52:07.652399 | orchestrator |  ] 2026-03-03 00:52:07.652403 | orchestrator |  } 2026-03-03 00:52:07.652407 | orchestrator | } 2026-03-03 00:52:07.652411 | orchestrator | 2026-03-03 00:52:07.652415 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:52:07.652418 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-03 00:52:07.652422 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-03 00:52:07.652426 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-03 00:52:07.652430 | orchestrator | 2026-03-03 00:52:07.652434 | orchestrator | 2026-03-03 00:52:07.652437 | orchestrator | 2026-03-03 00:52:07.652441 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:52:07.652445 | orchestrator | Tuesday 03 March 2026 00:52:07 +0000 (0:00:00.122) 0:01:12.557 ********* 2026-03-03 00:52:07.652449 | orchestrator | =============================================================================== 2026-03-03 00:52:07.652453 | orchestrator | Create block VGs -------------------------------------------------------- 5.61s 2026-03-03 00:52:07.652457 | orchestrator | Create block LVs -------------------------------------------------------- 4.15s 2026-03-03 00:52:07.652461 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.94s 2026-03-03 00:52:07.652465 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.91s 2026-03-03 00:52:07.652468 | orchestrator | Add known partitions to the list of available block devices ------------- 1.74s 2026-03-03 00:52:07.652472 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.72s 2026-03-03 00:52:07.652476 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2026-03-03 00:52:07.652480 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.52s 2026-03-03 00:52:07.652487 | orchestrator | Add known links to the list of available block devices ------------------ 1.25s 2026-03-03 00:52:07.923429 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2026-03-03 00:52:07.923554 | orchestrator | Print LVM report data --------------------------------------------------- 0.95s 2026-03-03 00:52:07.923564 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-03-03 00:52:07.923571 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.73s 2026-03-03 00:52:07.923577 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s 2026-03-03 00:52:07.923584 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-03-03 00:52:07.923591 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.70s 2026-03-03 00:52:07.923599 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.70s 2026-03-03 00:52:07.923606 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-03-03 00:52:07.923610 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.69s 2026-03-03 00:52:07.923614 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2026-03-03 00:52:19.953423 | orchestrator | 2026-03-03 00:52:19 | INFO  | Prepare task for execution of facts. 2026-03-03 00:52:20.019974 | orchestrator | 2026-03-03 00:52:20 | INFO  | Task 9ce6a08e-c85b-4425-8e72-e9057afb306a (facts) was prepared for execution. 2026-03-03 00:52:20.020085 | orchestrator | 2026-03-03 00:52:20 | INFO  | It takes a moment until task 9ce6a08e-c85b-4425-8e72-e9057afb306a (facts) has been started and output is visible here. 
2026-03-03 00:52:31.164172 | orchestrator | 2026-03-03 00:52:31.164291 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-03 00:52:31.164328 | orchestrator | 2026-03-03 00:52:31.164351 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-03 00:52:31.164363 | orchestrator | Tuesday 03 March 2026 00:52:23 +0000 (0:00:00.203) 0:00:00.203 ********* 2026-03-03 00:52:31.164375 | orchestrator | ok: [testbed-manager] 2026-03-03 00:52:31.164387 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:52:31.164399 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:52:31.164414 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:52:31.164433 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:52:31.164458 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:52:31.164480 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:52:31.164499 | orchestrator | 2026-03-03 00:52:31.164519 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-03 00:52:31.164537 | orchestrator | Tuesday 03 March 2026 00:52:24 +0000 (0:00:00.968) 0:00:01.171 ********* 2026-03-03 00:52:31.164557 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:52:31.164579 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:52:31.164598 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:52:31.164617 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:52:31.164637 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:52:31.164676 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:52:31.164709 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:52:31.164729 | orchestrator | 2026-03-03 00:52:31.164749 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-03 00:52:31.164770 | orchestrator | 2026-03-03 00:52:31.164789 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-03 00:52:31.164808 | orchestrator | Tuesday 03 March 2026 00:52:25 +0000 (0:00:01.060) 0:00:02.232 ********* 2026-03-03 00:52:31.164827 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:52:31.164846 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:52:31.164866 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:52:31.164940 | orchestrator | ok: [testbed-manager] 2026-03-03 00:52:31.164959 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:52:31.164979 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:52:31.164995 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:52:31.165008 | orchestrator | 2026-03-03 00:52:31.165021 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-03 00:52:31.165060 | orchestrator | 2026-03-03 00:52:31.165090 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-03 00:52:31.165108 | orchestrator | Tuesday 03 March 2026 00:52:30 +0000 (0:00:04.507) 0:00:06.740 ********* 2026-03-03 00:52:31.165124 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:52:31.165141 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:52:31.165159 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:52:31.165176 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:52:31.165194 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:52:31.165210 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:52:31.165228 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:52:31.165245 | orchestrator | 2026-03-03 00:52:31.165283 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:52:31.165316 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:52:31.165336 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-03 00:52:31.165392 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:52:31.165411 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:52:31.165429 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:52:31.165448 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:52:31.165467 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:52:31.165484 | orchestrator | 2026-03-03 00:52:31.165501 | orchestrator | 2026-03-03 00:52:31.165518 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:52:31.165537 | orchestrator | Tuesday 03 March 2026 00:52:30 +0000 (0:00:00.465) 0:00:07.205 ********* 2026-03-03 00:52:31.165557 | orchestrator | =============================================================================== 2026-03-03 00:52:31.165575 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.51s 2026-03-03 00:52:31.165594 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s 2026-03-03 00:52:31.165612 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.97s 2026-03-03 00:52:31.165631 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2026-03-03 00:52:43.227218 | orchestrator | 2026-03-03 00:52:43 | INFO  | Prepare task for execution of frr. 2026-03-03 00:52:43.295826 | orchestrator | 2026-03-03 00:52:43 | INFO  | Task 5f9ef481-167a-47bd-ab38-46299acfb103 (frr) was prepared for execution. 
2026-03-03 00:52:43.296026 | orchestrator | 2026-03-03 00:52:43 | INFO  | It takes a moment until task 5f9ef481-167a-47bd-ab38-46299acfb103 (frr) has been started and output is visible here. 2026-03-03 00:53:06.082987 | orchestrator | 2026-03-03 00:53:06.083091 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-03 00:53:06.083103 | orchestrator | 2026-03-03 00:53:06.083110 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-03 00:53:06.083117 | orchestrator | Tuesday 03 March 2026 00:52:47 +0000 (0:00:00.213) 0:00:00.213 ********* 2026-03-03 00:53:06.083124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-03 00:53:06.083131 | orchestrator | 2026-03-03 00:53:06.083138 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-03 00:53:06.083144 | orchestrator | Tuesday 03 March 2026 00:52:47 +0000 (0:00:00.215) 0:00:00.429 ********* 2026-03-03 00:53:06.083150 | orchestrator | changed: [testbed-manager] 2026-03-03 00:53:06.083157 | orchestrator | 2026-03-03 00:53:06.083163 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-03 00:53:06.083170 | orchestrator | Tuesday 03 March 2026 00:52:48 +0000 (0:00:01.068) 0:00:01.497 ********* 2026-03-03 00:53:06.083176 | orchestrator | changed: [testbed-manager] 2026-03-03 00:53:06.083182 | orchestrator | 2026-03-03 00:53:06.083188 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-03 00:53:06.083194 | orchestrator | Tuesday 03 March 2026 00:52:56 +0000 (0:00:08.090) 0:00:09.588 ********* 2026-03-03 00:53:06.083200 | orchestrator | ok: [testbed-manager] 2026-03-03 00:53:06.083207 | orchestrator | 2026-03-03 00:53:06.083213 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-03 00:53:06.083219 | orchestrator | Tuesday 03 March 2026 00:52:57 +0000 (0:00:00.941) 0:00:10.530 ********* 2026-03-03 00:53:06.083225 | orchestrator | changed: [testbed-manager] 2026-03-03 00:53:06.083251 | orchestrator | 2026-03-03 00:53:06.083257 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-03 00:53:06.083263 | orchestrator | Tuesday 03 March 2026 00:52:58 +0000 (0:00:00.854) 0:00:11.384 ********* 2026-03-03 00:53:06.083270 | orchestrator | ok: [testbed-manager] 2026-03-03 00:53:06.083276 | orchestrator | 2026-03-03 00:53:06.083282 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-03 00:53:06.083288 | orchestrator | Tuesday 03 March 2026 00:52:59 +0000 (0:00:01.064) 0:00:12.449 ********* 2026-03-03 00:53:06.083294 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:53:06.083300 | orchestrator | 2026-03-03 00:53:06.083306 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-03 00:53:06.083311 | orchestrator | Tuesday 03 March 2026 00:52:59 +0000 (0:00:00.146) 0:00:12.595 ********* 2026-03-03 00:53:06.083317 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:53:06.083323 | orchestrator | 2026-03-03 00:53:06.083329 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-03 00:53:06.083335 | orchestrator | Tuesday 03 March 2026 00:52:59 +0000 (0:00:00.142) 0:00:12.738 ********* 2026-03-03 00:53:06.083341 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:53:06.083347 | orchestrator | 2026-03-03 00:53:06.083353 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-03 00:53:06.083359 | orchestrator | Tuesday 03 March 2026 00:53:00 +0000 (0:00:00.159) 0:00:12.897 ********* 2026-03-03 
00:53:06.083365 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:53:06.083371 | orchestrator | 2026-03-03 00:53:06.083377 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-03 00:53:06.083383 | orchestrator | Tuesday 03 March 2026 00:53:00 +0000 (0:00:00.135) 0:00:13.032 ********* 2026-03-03 00:53:06.083389 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:53:06.083395 | orchestrator | 2026-03-03 00:53:06.083401 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-03 00:53:06.083407 | orchestrator | Tuesday 03 March 2026 00:53:00 +0000 (0:00:00.153) 0:00:13.186 ********* 2026-03-03 00:53:06.083413 | orchestrator | changed: [testbed-manager] 2026-03-03 00:53:06.083419 | orchestrator | 2026-03-03 00:53:06.083425 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-03 00:53:06.083431 | orchestrator | Tuesday 03 March 2026 00:53:01 +0000 (0:00:01.045) 0:00:14.231 ********* 2026-03-03 00:53:06.083436 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-03 00:53:06.083442 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-03 00:53:06.083449 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-03 00:53:06.083455 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-03 00:53:06.083461 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-03 00:53:06.083467 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-03 00:53:06.083473 | orchestrator | 2026-03-03 00:53:06.083479 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-03 00:53:06.083485 | orchestrator | Tuesday 03 March 2026 00:53:03 +0000 (0:00:02.054) 0:00:16.285 ********* 2026-03-03 00:53:06.083491 | orchestrator | ok: [testbed-manager] 2026-03-03 00:53:06.083497 | orchestrator | 2026-03-03 00:53:06.083503 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-03 00:53:06.083509 | orchestrator | Tuesday 03 March 2026 00:53:04 +0000 (0:00:01.162) 0:00:17.447 ********* 2026-03-03 00:53:06.083515 | orchestrator | changed: [testbed-manager] 2026-03-03 00:53:06.083521 | orchestrator | 2026-03-03 00:53:06.083527 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:53:06.083538 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-03 00:53:06.083545 | orchestrator | 2026-03-03 00:53:06.083552 | orchestrator | 2026-03-03 00:53:06.083572 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:53:06.083580 | orchestrator | Tuesday 03 March 2026 00:53:05 +0000 (0:00:01.290) 0:00:18.738 ********* 2026-03-03 00:53:06.083587 | orchestrator | =============================================================================== 2026-03-03 00:53:06.083594 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.09s 2026-03-03 00:53:06.083601 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.05s 2026-03-03 00:53:06.083609 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.29s 2026-03-03 00:53:06.083616 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.16s 2026-03-03 00:53:06.083624 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.07s 
2026-03-03 00:53:06.083631 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.06s 2026-03-03 00:53:06.083637 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.05s 2026-03-03 00:53:06.083644 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.94s 2026-03-03 00:53:06.083652 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.85s 2026-03-03 00:53:06.083658 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-03-03 00:53:06.083665 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s 2026-03-03 00:53:06.083672 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-03 00:53:06.083679 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-03-03 00:53:06.083686 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.14s 2026-03-03 00:53:06.083693 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-03 00:53:06.310481 | orchestrator | 2026-03-03 00:53:06.313536 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Mar 3 00:53:06 UTC 2026 2026-03-03 00:53:06.313617 | orchestrator | 2026-03-03 00:53:08.118244 | orchestrator | 2026-03-03 00:53:08 | INFO  | Collection nutshell is prepared for execution 2026-03-03 00:53:08.118332 | orchestrator | 2026-03-03 00:53:08 | INFO  | A [0] - dotfiles 2026-03-03 00:53:18.146171 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [0] - homer 2026-03-03 00:53:18.146259 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [0] - netdata 2026-03-03 00:53:18.146268 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [0] - openstackclient 2026-03-03 00:53:18.146275 | orchestrator | 2026-03-03 00:53:18 
| INFO  | A [0] - phpmyadmin 2026-03-03 00:53:18.146281 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [0] - common 2026-03-03 00:53:18.148734 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [1] -- loadbalancer 2026-03-03 00:53:18.148819 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [2] --- opensearch 2026-03-03 00:53:18.148827 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [2] --- mariadb-ng 2026-03-03 00:53:18.149076 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [3] ---- horizon 2026-03-03 00:53:18.149403 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [3] ---- keystone 2026-03-03 00:53:18.149422 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [4] ----- neutron 2026-03-03 00:53:18.149432 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [5] ------ wait-for-nova 2026-03-03 00:53:18.149614 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [6] ------- octavia 2026-03-03 00:53:18.151128 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [4] ----- barbican 2026-03-03 00:53:18.151292 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [4] ----- designate 2026-03-03 00:53:18.151311 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [4] ----- ironic 2026-03-03 00:53:18.151320 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [4] ----- placement 2026-03-03 00:53:18.151409 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [4] ----- magnum 2026-03-03 00:53:18.152614 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [1] -- openvswitch 2026-03-03 00:53:18.152692 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [2] --- ovn 2026-03-03 00:53:18.152720 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [1] -- memcached 2026-03-03 00:53:18.153071 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [1] -- redis 2026-03-03 00:53:18.153100 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [1] -- rabbitmq-ng 2026-03-03 00:53:18.153223 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [0] - kubernetes 2026-03-03 00:53:18.155613 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [1] -- 
kubeconfig 2026-03-03 00:53:18.155681 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [1] -- copy-kubeconfig 2026-03-03 00:53:18.155715 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [0] - ceph 2026-03-03 00:53:18.158134 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [1] -- ceph-pools 2026-03-03 00:53:18.158248 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [2] --- copy-ceph-keys 2026-03-03 00:53:18.158260 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [3] ---- cephclient 2026-03-03 00:53:18.158277 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-03 00:53:18.158305 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [4] ----- wait-for-keystone 2026-03-03 00:53:18.158492 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-03 00:53:18.158942 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [5] ------ glance 2026-03-03 00:53:18.158986 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [5] ------ cinder 2026-03-03 00:53:18.159000 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [5] ------ nova 2026-03-03 00:53:18.159221 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [4] ----- prometheus 2026-03-03 00:53:18.159315 | orchestrator | 2026-03-03 00:53:18 | INFO  | A [5] ------ grafana 2026-03-03 00:53:18.340748 | orchestrator | 2026-03-03 00:53:18 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-03 00:53:18.340869 | orchestrator | 2026-03-03 00:53:18 | INFO  | Tasks are running in the background 2026-03-03 00:53:20.981127 | orchestrator | 2026-03-03 00:53:20 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-03 00:53:23.111639 | orchestrator | 2026-03-03 00:53:23 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED 2026-03-03 00:53:23.112082 | orchestrator | 2026-03-03 00:53:23 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 00:53:23.112842 | orchestrator | 2026-03-03 00:53:23 | INFO 
 | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:23.113370 | orchestrator | 2026-03-03 00:53:23 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:23.114160 | orchestrator | 2026-03-03 00:53:23 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:23.116530 | orchestrator | 2026-03-03 00:53:23 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:23.117115 | orchestrator | 2026-03-03 00:53:23 | INFO  | Task 1210806b-1fc6-4f5b-8b35-b8e8c6df6c02 is in state STARTED
2026-03-03 00:53:23.119258 | orchestrator | 2026-03-03 00:53:23 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:26.150893 | orchestrator | 2026-03-03 00:53:26 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:26.151119 | orchestrator | 2026-03-03 00:53:26 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:26.152024 | orchestrator | 2026-03-03 00:53:26 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:26.152554 | orchestrator | 2026-03-03 00:53:26 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:26.153440 | orchestrator | 2026-03-03 00:53:26 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:26.153847 | orchestrator | 2026-03-03 00:53:26 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:26.154420 | orchestrator | 2026-03-03 00:53:26 | INFO  | Task 1210806b-1fc6-4f5b-8b35-b8e8c6df6c02 is in state STARTED
2026-03-03 00:53:26.154459 | orchestrator | 2026-03-03 00:53:26 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:29.247185 | orchestrator | 2026-03-03 00:53:29 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:29.247275 | orchestrator | 2026-03-03 00:53:29 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:29.248197 | orchestrator | 2026-03-03 00:53:29 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:29.248543 | orchestrator | 2026-03-03 00:53:29 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:29.249197 | orchestrator | 2026-03-03 00:53:29 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:29.249631 | orchestrator | 2026-03-03 00:53:29 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:29.251574 | orchestrator | 2026-03-03 00:53:29 | INFO  | Task 1210806b-1fc6-4f5b-8b35-b8e8c6df6c02 is in state STARTED
2026-03-03 00:53:29.254155 | orchestrator | 2026-03-03 00:53:29 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:32.325095 | orchestrator | 2026-03-03 00:53:32 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:32.325184 | orchestrator | 2026-03-03 00:53:32 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:32.325195 | orchestrator | 2026-03-03 00:53:32 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:32.325222 | orchestrator | 2026-03-03 00:53:32 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:32.327840 | orchestrator | 2026-03-03 00:53:32 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:32.330911 | orchestrator | 2026-03-03 00:53:32 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:32.333177 | orchestrator | 2026-03-03 00:53:32 | INFO  | Task 1210806b-1fc6-4f5b-8b35-b8e8c6df6c02 is in state STARTED
2026-03-03 00:53:32.333229 | orchestrator | 2026-03-03 00:53:32 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:35.365249 | orchestrator | 2026-03-03 00:53:35 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:35.365337 | orchestrator | 2026-03-03 00:53:35 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:35.365346 | orchestrator | 2026-03-03 00:53:35 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:35.365354 | orchestrator | 2026-03-03 00:53:35 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:35.366115 | orchestrator | 2026-03-03 00:53:35 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:35.366908 | orchestrator | 2026-03-03 00:53:35 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:35.367506 | orchestrator | 2026-03-03 00:53:35 | INFO  | Task 1210806b-1fc6-4f5b-8b35-b8e8c6df6c02 is in state STARTED
2026-03-03 00:53:35.367538 | orchestrator | 2026-03-03 00:53:35 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:38.415545 | orchestrator | 2026-03-03 00:53:38 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:38.420426 | orchestrator | 2026-03-03 00:53:38 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:38.430302 | orchestrator | 2026-03-03 00:53:38 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:38.430387 | orchestrator | 2026-03-03 00:53:38 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:38.430396 | orchestrator | 2026-03-03 00:53:38 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:38.430403 | orchestrator | 2026-03-03 00:53:38 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:38.430409 | orchestrator | 2026-03-03 00:53:38 | INFO  | Task 1210806b-1fc6-4f5b-8b35-b8e8c6df6c02 is in state STARTED
2026-03-03 00:53:38.432076 | orchestrator | 2026-03-03 00:53:38 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:41.465816 | orchestrator | 2026-03-03 00:53:41 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:41.465884 | orchestrator | 2026-03-03 00:53:41 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:41.469723 | orchestrator | 2026-03-03 00:53:41 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:41.470997 | orchestrator | 2026-03-03 00:53:41 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:41.473353 | orchestrator | 2026-03-03 00:53:41 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:41.475744 | orchestrator | 2026-03-03 00:53:41 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:41.477351 | orchestrator | 2026-03-03 00:53:41 | INFO  | Task 1210806b-1fc6-4f5b-8b35-b8e8c6df6c02 is in state STARTED
2026-03-03 00:53:41.477433 | orchestrator | 2026-03-03 00:53:41 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:44.607762 | orchestrator |
2026-03-03 00:53:44.607790 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-03 00:53:44.607795 | orchestrator |
2026-03-03 00:53:44.607799 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-03 00:53:44.607803 | orchestrator | Tuesday 03 March 2026 00:53:30 +0000 (0:00:00.261) 0:00:00.261 *********
2026-03-03 00:53:44.607808 | orchestrator | changed: [testbed-manager]
2026-03-03 00:53:44.607812 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:53:44.607816 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:53:44.607820 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:53:44.607824 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:53:44.607828 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:53:44.607831 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:53:44.607835 | orchestrator |
2026-03-03 00:53:44.607839 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-03 00:53:44.607843 | orchestrator | Tuesday 03 March 2026 00:53:33 +0000 (0:00:03.518) 0:00:03.780 *********
2026-03-03 00:53:44.607856 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-03 00:53:44.607861 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-03 00:53:44.607867 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-03 00:53:44.607871 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-03 00:53:44.607874 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-03 00:53:44.607878 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-03 00:53:44.607882 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-03 00:53:44.607886 | orchestrator |
2026-03-03 00:53:44.607890 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-03 00:53:44.607894 | orchestrator | Tuesday 03 March 2026 00:53:35 +0000 (0:00:01.926) 0:00:05.706 *********
2026-03-03 00:53:44.607900 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-03 00:53:34.446381', 'end': '2026-03-03 00:53:34.454125', 'delta': '0:00:00.007744', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-03 00:53:44.607906 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-03 00:53:34.575541', 'end': '2026-03-03 00:53:34.581528', 'delta': '0:00:00.005987', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-03 00:53:44.607911 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-03 00:53:34.571033', 'end': '2026-03-03 00:53:34.580956', 'delta': '0:00:00.009923', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-03 00:53:44.607925 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-03 00:53:34.649369', 'end': '2026-03-03 00:53:34.656379', 'delta': '0:00:00.007010', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-03 00:53:44.607937 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-03 00:53:34.816563', 'end': '2026-03-03 00:53:34.824168', 'delta': '0:00:00.007605', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-03 00:53:44.607941 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-03 00:53:35.267389', 'end': '2026-03-03 00:53:35.272846', 'delta': '0:00:00.005457', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-03 00:53:44.607945 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-03 00:53:35.652018', 'end': '2026-03-03 00:53:35.657928', 'delta': '0:00:00.005910', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-03 00:53:44.607949 | orchestrator |
2026-03-03 00:53:44.607953 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-03 00:53:44.607957 | orchestrator | Tuesday 03 March 2026 00:53:38 +0000 (0:00:02.552) 0:00:08.259 *********
2026-03-03 00:53:44.607961 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-03 00:53:44.607965 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-03 00:53:44.607969 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-03 00:53:44.607973 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-03 00:53:44.607976 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-03 00:53:44.607980 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-03 00:53:44.607984 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-03 00:53:44.607988 | orchestrator |
2026-03-03 00:53:44.607991 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-03 00:53:44.607995 | orchestrator | Tuesday 03 March 2026 00:53:40 +0000 (0:00:01.634) 0:00:09.893 *********
2026-03-03 00:53:44.607999 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-03 00:53:44.608003 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-03 00:53:44.608007 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-03 00:53:44.608013 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-03 00:53:44.608016 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-03 00:53:44.608020 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-03 00:53:44.608024 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-03 00:53:44.608028 | orchestrator |
2026-03-03 00:53:44.608032 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:53:44.608038 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:53:44.608043 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:53:44.608047 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:53:44.608051 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:53:44.608055 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:53:44.608059 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:53:44.608063 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:53:44.608067 | orchestrator |
2026-03-03 00:53:44.608071 | orchestrator |
2026-03-03 00:53:44.608074 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:53:44.608078 | orchestrator | Tuesday 03 March 2026 00:53:42 +0000 (0:00:02.400) 0:00:12.294 *********
2026-03-03 00:53:44.608082 | orchestrator | ===============================================================================
2026-03-03 00:53:44.608086 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.52s
2026-03-03 00:53:44.608090 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.55s
2026-03-03 00:53:44.608093 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.40s
2026-03-03 00:53:44.608097 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.93s
2026-03-03 00:53:44.608101 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.63s
2026-03-03 00:53:44.608105 | orchestrator | 2026-03-03 00:53:44 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:44.608109 | orchestrator | 2026-03-03 00:53:44 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:44.608112 | orchestrator | 2026-03-03 00:53:44 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:44.608116 | orchestrator | 2026-03-03 00:53:44 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:53:44.608120 | orchestrator | 2026-03-03 00:53:44 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:44.608124 | orchestrator | 2026-03-03 00:53:44 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:44.608253 | orchestrator | 2026-03-03 00:53:44 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:44.608259 | orchestrator | 2026-03-03 00:53:44 | INFO  | Task 1210806b-1fc6-4f5b-8b35-b8e8c6df6c02 is in state SUCCESS
2026-03-03 00:53:44.608263 | orchestrator | 2026-03-03 00:53:44 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:47.703747 | orchestrator | 2026-03-03 00:53:47 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:47.703820 | orchestrator | 2026-03-03 00:53:47 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:47.705169 | orchestrator | 2026-03-03 00:53:47 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:47.706295 | orchestrator | 2026-03-03 00:53:47 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:53:47.707257 | orchestrator | 2026-03-03 00:53:47 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:47.710845 | orchestrator | 2026-03-03 00:53:47 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:47.713343 | orchestrator | 2026-03-03 00:53:47 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:47.714144 | orchestrator | 2026-03-03 00:53:47 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:50.794827 | orchestrator | 2026-03-03 00:53:50 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:50.796816 | orchestrator | 2026-03-03 00:53:50 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:50.797174 | orchestrator | 2026-03-03 00:53:50 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:50.797716 | orchestrator | 2026-03-03 00:53:50 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:53:50.799634 | orchestrator | 2026-03-03 00:53:50 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:50.801708 | orchestrator | 2026-03-03 00:53:50 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:50.802884 | orchestrator | 2026-03-03 00:53:50 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:50.802913 | orchestrator | 2026-03-03 00:53:50 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:53.847096 | orchestrator | 2026-03-03 00:53:53 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:53.849094 | orchestrator | 2026-03-03 00:53:53 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:53.849524 | orchestrator | 2026-03-03 00:53:53 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:53.850422 | orchestrator | 2026-03-03 00:53:53 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:53:53.851948 | orchestrator | 2026-03-03 00:53:53 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:53.852892 | orchestrator | 2026-03-03 00:53:53 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:53.853644 | orchestrator | 2026-03-03 00:53:53 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:53.854236 | orchestrator | 2026-03-03 00:53:53 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:53:56.899315 | orchestrator | 2026-03-03 00:53:56 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:53:56.899409 | orchestrator | 2026-03-03 00:53:56 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:53:56.899417 | orchestrator | 2026-03-03 00:53:56 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:53:56.899422 | orchestrator | 2026-03-03 00:53:56 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:53:56.899428 | orchestrator | 2026-03-03 00:53:56 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:53:56.899455 | orchestrator | 2026-03-03 00:53:56 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:53:56.899461 | orchestrator | 2026-03-03 00:53:56 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:53:56.899466 | orchestrator | 2026-03-03 00:53:56 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:00.006768 | orchestrator | 2026-03-03 00:53:59 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:54:00.006878 | orchestrator | 2026-03-03 00:53:59 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:00.006896 | orchestrator | 2026-03-03 00:53:59 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:00.006903 | orchestrator | 2026-03-03 00:53:59 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:00.006909 | orchestrator | 2026-03-03 00:53:59 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:00.006915 | orchestrator | 2026-03-03 00:53:59 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:00.006921 | orchestrator | 2026-03-03 00:53:59 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:54:00.006929 | orchestrator | 2026-03-03 00:53:59 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:03.160034 | orchestrator | 2026-03-03 00:54:03 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state STARTED
2026-03-03 00:54:03.160092 | orchestrator | 2026-03-03 00:54:03 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:03.160101 | orchestrator | 2026-03-03 00:54:03 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:03.160109 | orchestrator | 2026-03-03 00:54:03 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:03.160116 | orchestrator | 2026-03-03 00:54:03 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:03.160123 | orchestrator | 2026-03-03 00:54:03 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:03.160130 | orchestrator | 2026-03-03 00:54:03 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:54:03.160137 | orchestrator | 2026-03-03 00:54:03 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:06.206249 | orchestrator | 2026-03-03 00:54:06 | INFO  | Task f7ba1c78-7292-4dea-b992-ac32192afa42 is in state SUCCESS
2026-03-03 00:54:06.206312 | orchestrator | 2026-03-03 00:54:06 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:06.206321 | orchestrator | 2026-03-03 00:54:06 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:06.206327 | orchestrator | 2026-03-03 00:54:06 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:06.206334 | orchestrator | 2026-03-03 00:54:06 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:06.206341 | orchestrator | 2026-03-03 00:54:06 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:06.206347 | orchestrator | 2026-03-03 00:54:06 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:54:06.206353 | orchestrator | 2026-03-03 00:54:06 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:09.313124 | orchestrator | 2026-03-03 00:54:09 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:09.313191 | orchestrator | 2026-03-03 00:54:09 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:09.313197 | orchestrator | 2026-03-03 00:54:09 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:09.313201 | orchestrator | 2026-03-03 00:54:09 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:09.313205 | orchestrator | 2026-03-03 00:54:09 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:09.313209 | orchestrator | 2026-03-03 00:54:09 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:54:09.313213 | orchestrator | 2026-03-03 00:54:09 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:12.279608 | orchestrator | 2026-03-03 00:54:12 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:12.281334 | orchestrator | 2026-03-03 00:54:12 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:12.292444 | orchestrator | 2026-03-03 00:54:12 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:12.294711 | orchestrator | 2026-03-03 00:54:12 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:12.299421 | orchestrator | 2026-03-03 00:54:12 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:12.301201 | orchestrator | 2026-03-03 00:54:12 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:54:12.301528 | orchestrator | 2026-03-03 00:54:12 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:15.347880 | orchestrator | 2026-03-03 00:54:15 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:15.359546 | orchestrator | 2026-03-03 00:54:15 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:15.359664 | orchestrator | 2026-03-03 00:54:15 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:15.359678 | orchestrator | 2026-03-03 00:54:15 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:15.359692 | orchestrator | 2026-03-03 00:54:15 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:15.359715 | orchestrator | 2026-03-03 00:54:15 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state STARTED
2026-03-03 00:54:15.359723 | orchestrator | 2026-03-03 00:54:15 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:18.501180 | orchestrator | 2026-03-03 00:54:18 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:18.504215 | orchestrator | 2026-03-03 00:54:18 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:18.506747 | orchestrator | 2026-03-03 00:54:18 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:18.508564 | orchestrator | 2026-03-03 00:54:18 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:18.509236 | orchestrator | 2026-03-03 00:54:18 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:18.509663 | orchestrator | 2026-03-03 00:54:18 | INFO  | Task 2ec59945-1f75-4af6-a6a8-cc405dfe4665 is in state SUCCESS
2026-03-03 00:54:18.509749 | orchestrator | 2026-03-03 00:54:18 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:21.542858 | orchestrator | 2026-03-03 00:54:21 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:21.543461 | orchestrator | 2026-03-03 00:54:21 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:21.544556 | orchestrator | 2026-03-03 00:54:21 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:21.545289 | orchestrator | 2026-03-03 00:54:21 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:21.546519 | orchestrator | 2026-03-03 00:54:21 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:21.546555 | orchestrator | 2026-03-03 00:54:21 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:24.586218 | orchestrator | 2026-03-03 00:54:24 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:24.586354 | orchestrator | 2026-03-03 00:54:24 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:24.586373 | orchestrator | 2026-03-03 00:54:24 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:24.586384 | orchestrator | 2026-03-03 00:54:24 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:24.588181 | orchestrator | 2026-03-03 00:54:24 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:24.588247 | orchestrator | 2026-03-03 00:54:24 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:27.658167 | orchestrator | 2026-03-03 00:54:27 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:27.658241 | orchestrator | 2026-03-03 00:54:27 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:27.661615 | orchestrator | 2026-03-03 00:54:27 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:27.671951 | orchestrator | 2026-03-03 00:54:27 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:27.676516 | orchestrator | 2026-03-03 00:54:27 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:27.676599 | orchestrator | 2026-03-03 00:54:27 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:30.724247 | orchestrator | 2026-03-03 00:54:30 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:30.725216 | orchestrator | 2026-03-03 00:54:30 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:30.726637 | orchestrator | 2026-03-03 00:54:30 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:30.731125 | orchestrator | 2026-03-03 00:54:30 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:30.733480 | orchestrator | 2026-03-03 00:54:30 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:30.735418 | orchestrator | 2026-03-03 00:54:30 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:33.778164 | orchestrator | 2026-03-03 00:54:33 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:33.781631 | orchestrator | 2026-03-03 00:54:33 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:33.784226 | orchestrator | 2026-03-03 00:54:33 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:33.787921 | orchestrator | 2026-03-03 00:54:33 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:33.793737 | orchestrator | 2026-03-03 00:54:33 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:33.793818 | orchestrator | 2026-03-03 00:54:33 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:36.856938 | orchestrator | 2026-03-03 00:54:36 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:36.857953 | orchestrator | 2026-03-03 00:54:36 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:36.859040 | orchestrator | 2026-03-03 00:54:36 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:36.860650 | orchestrator | 2026-03-03 00:54:36 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:36.863902 | orchestrator | 2026-03-03 00:54:36 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:36.863954 | orchestrator | 2026-03-03 00:54:36 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:39.933365 | orchestrator | 2026-03-03 00:54:39 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:39.933733 | orchestrator | 2026-03-03 00:54:39 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:39.934544 | orchestrator | 2026-03-03 00:54:39 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:39.935297 | orchestrator | 2026-03-03 00:54:39 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:39.936247 | orchestrator | 2026-03-03 00:54:39 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:39.936420 | orchestrator | 2026-03-03 00:54:39 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:42.965674 | orchestrator | 2026-03-03 00:54:42 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:42.966872 | orchestrator | 2026-03-03 00:54:42 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:42.967040 | orchestrator | 2026-03-03 00:54:42 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:42.968788 | orchestrator | 2026-03-03 00:54:42 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:42.971144 | orchestrator | 2026-03-03 00:54:42 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:42.971179 | orchestrator | 2026-03-03 00:54:42 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:46.030180 | orchestrator | 2026-03-03 00:54:46 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:46.030301 | orchestrator | 2026-03-03 00:54:46 | INFO  | Task 
6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:46.030319 | orchestrator | 2026-03-03 00:54:46 | INFO  | Task 67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state STARTED
2026-03-03 00:54:46.031235 | orchestrator | 2026-03-03 00:54:46 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:46.032218 | orchestrator | 2026-03-03 00:54:46 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:46.032296 | orchestrator | 2026-03-03 00:54:46 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:49.108151 | orchestrator | 2026-03-03 00:54:49 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:49.108207 | orchestrator | 2026-03-03 00:54:49 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:49.111765 | orchestrator |
2026-03-03 00:54:49.111830 | orchestrator |
2026-03-03 00:54:49.111838 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-03 00:54:49.111861 | orchestrator |
2026-03-03 00:54:49.111868 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-03 00:54:49.111874 | orchestrator | Tuesday 03 March 2026 00:53:29 +0000 (0:00:00.596) 0:00:00.596 *********
2026-03-03 00:54:49.111878 | orchestrator | ok: [testbed-manager] => {
2026-03-03 00:54:49.111883 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-03 00:54:49.111887 | orchestrator | }
2026-03-03 00:54:49.111891 | orchestrator |
2026-03-03 00:54:49.111894 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-03 00:54:49.111897 | orchestrator | Tuesday 03 March 2026 00:53:29 +0000 (0:00:00.396) 0:00:00.992 *********
2026-03-03 00:54:49.111901 | orchestrator | ok: [testbed-manager]
2026-03-03 00:54:49.111904 | orchestrator |
2026-03-03 00:54:49.111916 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-03 00:54:49.111921 | orchestrator | Tuesday 03 March 2026 00:53:30 +0000 (0:00:01.087) 0:00:02.080 *********
2026-03-03 00:54:49.111928 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-03 00:54:49.111935 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-03 00:54:49.111940 | orchestrator |
2026-03-03 00:54:49.111945 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-03 00:54:49.111949 | orchestrator | Tuesday 03 March 2026 00:53:32 +0000 (0:00:01.593) 0:00:03.674 *********
2026-03-03 00:54:49.111954 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.111959 | orchestrator |
2026-03-03 00:54:49.111964 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-03 00:54:49.111969 | orchestrator | Tuesday 03 March 2026 00:53:34 +0000 (0:00:01.804) 0:00:05.478 *********
2026-03-03 00:54:49.111974 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.111979 | orchestrator |
2026-03-03 00:54:49.111984 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-03 00:54:49.111989 | orchestrator | Tuesday 03 March 2026 00:53:35 +0000 (0:00:01.165) 0:00:06.644 *********
2026-03-03 00:54:49.111994 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-03 00:54:49.111999 | orchestrator | ok: [testbed-manager]
2026-03-03 00:54:49.112004 | orchestrator |
2026-03-03 00:54:49.112009 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-03 00:54:49.112015 | orchestrator | Tuesday 03 March 2026 00:53:59 +0000 (0:00:24.115) 0:00:30.759 *********
2026-03-03 00:54:49.112020 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.112025 | orchestrator |
2026-03-03 00:54:49.112031 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:54:49.112036 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:54:49.112043 | orchestrator |
2026-03-03 00:54:49.112049 | orchestrator |
2026-03-03 00:54:49.112056 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:54:49.112061 | orchestrator | Tuesday 03 March 2026 00:54:03 +0000 (0:00:03.730) 0:00:34.490 *********
2026-03-03 00:54:49.112066 | orchestrator | ===============================================================================
2026-03-03 00:54:49.112071 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.12s
2026-03-03 00:54:49.112075 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.73s
2026-03-03 00:54:49.112080 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.80s
2026-03-03 00:54:49.112085 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.59s
2026-03-03 00:54:49.112090 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.17s
2026-03-03 00:54:49.112095 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.09s
2026-03-03 00:54:49.112100 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.40s
2026-03-03 00:54:49.112112 | orchestrator |
2026-03-03 00:54:49.112116 | orchestrator |
2026-03-03 00:54:49.112121 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-03 00:54:49.112128 | orchestrator |
2026-03-03 00:54:49.112135 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-03 00:54:49.112140 | orchestrator | Tuesday 03 March 2026 00:53:31 +0000 (0:00:00.963) 0:00:00.963 *********
2026-03-03 00:54:49.112145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-03 00:54:49.112152 | orchestrator |
2026-03-03 00:54:49.112159 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-03 00:54:49.112166 | orchestrator | Tuesday 03 March 2026 00:53:31 +0000 (0:00:00.372) 0:00:01.335 *********
2026-03-03 00:54:49.112171 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-03 00:54:49.112176 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-03 00:54:49.112181 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-03 00:54:49.112186 | orchestrator |
2026-03-03 00:54:49.112192 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-03 00:54:49.112197 | orchestrator | Tuesday 03 March 2026 00:53:34 +0000 (0:00:02.965) 0:00:04.301 *********
2026-03-03 00:54:49.112202 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.112207 | orchestrator |
2026-03-03 00:54:49.112211 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-03 00:54:49.112215 | orchestrator | Tuesday 03 March 2026 00:53:37 +0000 (0:00:02.681)
0:00:06.984 *********
2026-03-03 00:54:49.112233 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-03 00:54:49.112242 | orchestrator | ok: [testbed-manager]
2026-03-03 00:54:49.112247 | orchestrator |
2026-03-03 00:54:49.112251 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-03 00:54:49.112256 | orchestrator | Tuesday 03 March 2026 00:54:09 +0000 (0:00:32.054) 0:00:39.038 *********
2026-03-03 00:54:49.112261 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.112265 | orchestrator |
2026-03-03 00:54:49.112272 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-03 00:54:49.112279 | orchestrator | Tuesday 03 March 2026 00:54:10 +0000 (0:00:01.049) 0:00:40.088 *********
2026-03-03 00:54:49.112285 | orchestrator | ok: [testbed-manager]
2026-03-03 00:54:49.112290 | orchestrator |
2026-03-03 00:54:49.112295 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-03 00:54:49.112300 | orchestrator | Tuesday 03 March 2026 00:54:11 +0000 (0:00:00.865) 0:00:40.953 *********
2026-03-03 00:54:49.112305 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.112310 | orchestrator |
2026-03-03 00:54:49.112316 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-03 00:54:49.112321 | orchestrator | Tuesday 03 March 2026 00:54:13 +0000 (0:00:02.581) 0:00:43.535 *********
2026-03-03 00:54:49.112326 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.112331 | orchestrator |
2026-03-03 00:54:49.112337 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-03 00:54:49.112342 | orchestrator | Tuesday 03 March 2026 00:54:16 +0000 (0:00:02.173) 0:00:45.708 *********
2026-03-03 00:54:49.112348 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.112351 | orchestrator |
2026-03-03 00:54:49.112355 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-03 00:54:49.112359 | orchestrator | Tuesday 03 March 2026 00:54:16 +0000 (0:00:00.641) 0:00:46.350 *********
2026-03-03 00:54:49.112365 | orchestrator | ok: [testbed-manager]
2026-03-03 00:54:49.112370 | orchestrator |
2026-03-03 00:54:49.112375 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:54:49.112381 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:54:49.112391 | orchestrator |
2026-03-03 00:54:49.112397 | orchestrator |
2026-03-03 00:54:49.112402 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:54:49.112408 | orchestrator | Tuesday 03 March 2026 00:54:17 +0000 (0:00:00.321) 0:00:46.672 *********
2026-03-03 00:54:49.112413 | orchestrator | ===============================================================================
2026-03-03 00:54:49.112418 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.05s
2026-03-03 00:54:49.112423 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.97s
2026-03-03 00:54:49.112428 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.68s
2026-03-03 00:54:49.112434 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.58s
2026-03-03 00:54:49.112439 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.17s
2026-03-03 00:54:49.112444 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.05s
2026-03-03 00:54:49.112449 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.87s
2026-03-03 00:54:49.112455 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.64s
2026-03-03 00:54:49.112460 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.37s
2026-03-03 00:54:49.112465 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.32s
2026-03-03 00:54:49.112471 | orchestrator |
2026-03-03 00:54:49.112476 | orchestrator |
2026-03-03 00:54:49.112481 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-03 00:54:49.112487 | orchestrator |
2026-03-03 00:54:49.112492 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-03 00:54:49.112525 | orchestrator | Tuesday 03 March 2026 00:53:46 +0000 (0:00:00.188) 0:00:00.188 *********
2026-03-03 00:54:49.112547 | orchestrator | ok: [testbed-manager]
2026-03-03 00:54:49.112552 | orchestrator |
2026-03-03 00:54:49.112557 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-03 00:54:49.112562 | orchestrator | Tuesday 03 March 2026 00:53:48 +0000 (0:00:01.246) 0:00:01.435 *********
2026-03-03 00:54:49.112567 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-03 00:54:49.112572 | orchestrator |
2026-03-03 00:54:49.112577 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-03 00:54:49.112581 | orchestrator | Tuesday 03 March 2026 00:53:49 +0000 (0:00:01.151) 0:00:02.586 *********
2026-03-03 00:54:49.112586 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.112591 | orchestrator |
2026-03-03 00:54:49.112596 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-03 00:54:49.112601 | orchestrator | Tuesday 03 March 2026 00:53:51 +0000 (0:00:01.748) 0:00:04.335 *********
2026-03-03 00:54:49.112607 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-03 00:54:49.112613 | orchestrator | ok: [testbed-manager]
2026-03-03 00:54:49.112618 | orchestrator |
2026-03-03 00:54:49.112623 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-03 00:54:49.112628 | orchestrator | Tuesday 03 March 2026 00:54:43 +0000 (0:00:52.364) 0:00:56.699 *********
2026-03-03 00:54:49.112633 | orchestrator | changed: [testbed-manager]
2026-03-03 00:54:49.112663 | orchestrator |
2026-03-03 00:54:49.112669 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:54:49.112675 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:54:49.112680 | orchestrator |
2026-03-03 00:54:49.112685 | orchestrator |
2026-03-03 00:54:49.112691 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:54:49.112702 | orchestrator | Tuesday 03 March 2026 00:54:46 +0000 (0:00:03.412) 0:01:00.112 *********
2026-03-03 00:54:49.112713 | orchestrator | ===============================================================================
2026-03-03 00:54:49.112718 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 52.36s
2026-03-03 00:54:49.112724 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.41s
2026-03-03 00:54:49.112729 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.75s
2026-03-03 00:54:49.112734 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.25s
2026-03-03 00:54:49.112740 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.15s
2026-03-03 00:54:49.112745 | orchestrator | 2026-03-03 00:54:49 | INFO  | Task
67a208e0-32e6-4fb1-972b-82daa2a1c632 is in state SUCCESS
2026-03-03 00:54:49.112754 | orchestrator | 2026-03-03 00:54:49 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:49.112759 | orchestrator | 2026-03-03 00:54:49 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:49.112765 | orchestrator | 2026-03-03 00:54:49 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:52.162960 | orchestrator | 2026-03-03 00:54:52 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:52.165645 | orchestrator | 2026-03-03 00:54:52 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:52.167822 | orchestrator | 2026-03-03 00:54:52 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:52.171630 | orchestrator | 2026-03-03 00:54:52 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:52.171691 | orchestrator | 2026-03-03 00:54:52 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:55.215142 | orchestrator | 2026-03-03 00:54:55 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:55.215840 | orchestrator | 2026-03-03 00:54:55 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:55.217324 | orchestrator | 2026-03-03 00:54:55 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:55.219783 | orchestrator | 2026-03-03 00:54:55 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:55.219831 | orchestrator | 2026-03-03 00:54:55 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:54:58.262383 | orchestrator | 2026-03-03 00:54:58 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:54:58.262992 | orchestrator | 2026-03-03 00:54:58 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:54:58.264106 | orchestrator | 2026-03-03 00:54:58 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:54:58.266145 | orchestrator | 2026-03-03 00:54:58 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:54:58.266185 | orchestrator | 2026-03-03 00:54:58 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:01.321863 | orchestrator | 2026-03-03 00:55:01 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:01.324892 | orchestrator | 2026-03-03 00:55:01 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:01.329090 | orchestrator | 2026-03-03 00:55:01 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:01.332666 | orchestrator | 2026-03-03 00:55:01 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state STARTED
2026-03-03 00:55:01.332720 | orchestrator | 2026-03-03 00:55:01 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:04.371470 | orchestrator | 2026-03-03 00:55:04 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:04.374406 | orchestrator | 2026-03-03 00:55:04 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:04.376122 | orchestrator | 2026-03-03 00:55:04 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:04.377372 | orchestrator |
2026-03-03 00:55:04.377427 | orchestrator |
2026-03-03 00:55:04.377435 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 00:55:04.377442 | orchestrator |
2026-03-03 00:55:04.377448 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 00:55:04.377454 | orchestrator | Tuesday 03 March 2026 00:53:31 +0000 (0:00:00.882) 0:00:00.882
********* 2026-03-03 00:55:04.377460 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-03 00:55:04.377467 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-03 00:55:04.377472 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-03 00:55:04.377478 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-03 00:55:04.377484 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-03 00:55:04.377489 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-03 00:55:04.377560 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-03 00:55:04.377566 | orchestrator | 2026-03-03 00:55:04.377571 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-03 00:55:04.377577 | orchestrator | 2026-03-03 00:55:04.377582 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-03-03 00:55:04.377588 | orchestrator | Tuesday 03 March 2026 00:53:32 +0000 (0:00:01.760) 0:00:02.642 ********* 2026-03-03 00:55:04.377613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 00:55:04.377622 | orchestrator | 2026-03-03 00:55:04.377627 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-03 00:55:04.377633 | orchestrator | Tuesday 03 March 2026 00:53:34 +0000 (0:00:01.554) 0:00:04.197 ********* 2026-03-03 00:55:04.377639 | orchestrator | ok: [testbed-manager] 2026-03-03 00:55:04.377647 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:55:04.377653 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:55:04.377658 | orchestrator | ok: [testbed-node-3] 
2026-03-03 00:55:04.377664 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:55:04.377669 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:55:04.377675 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:55:04.377680 | orchestrator | 2026-03-03 00:55:04.377686 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-03 00:55:04.377691 | orchestrator | Tuesday 03 March 2026 00:53:36 +0000 (0:00:02.124) 0:00:06.321 ********* 2026-03-03 00:55:04.377697 | orchestrator | ok: [testbed-manager] 2026-03-03 00:55:04.377703 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:55:04.377708 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:55:04.377714 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:55:04.377719 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:55:04.377724 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:55:04.377728 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:55:04.377733 | orchestrator | 2026-03-03 00:55:04.377739 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-03-03 00:55:04.377744 | orchestrator | Tuesday 03 March 2026 00:53:40 +0000 (0:00:03.765) 0:00:10.087 ********* 2026-03-03 00:55:04.377750 | orchestrator | changed: [testbed-manager] 2026-03-03 00:55:04.377755 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:55:04.377761 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:55:04.377766 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:55:04.377784 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:55:04.377790 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:55:04.377796 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:55:04.377801 | orchestrator | 2026-03-03 00:55:04.377807 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-03 00:55:04.377812 | orchestrator | Tuesday 03 March 2026 00:53:42 +0000 (0:00:02.360) 0:00:12.447 ********* 
2026-03-03 00:55:04.377818 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:55:04.377824 | orchestrator | changed: [testbed-manager] 2026-03-03 00:55:04.377829 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:55:04.377835 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:55:04.377841 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:55:04.377846 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:55:04.377852 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:55:04.377858 | orchestrator | 2026-03-03 00:55:04.377863 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-03 00:55:04.377869 | orchestrator | Tuesday 03 March 2026 00:53:56 +0000 (0:00:14.158) 0:00:26.606 ********* 2026-03-03 00:55:04.377875 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:55:04.377880 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:55:04.377886 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:55:04.377891 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:55:04.377897 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:55:04.377902 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:55:04.377908 | orchestrator | changed: [testbed-manager] 2026-03-03 00:55:04.377913 | orchestrator | 2026-03-03 00:55:04.377919 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-03-03 00:55:04.377925 | orchestrator | Tuesday 03 March 2026 00:54:34 +0000 (0:00:37.887) 0:01:04.495 ********* 2026-03-03 00:55:04.377932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 00:55:04.377939 | orchestrator | 2026-03-03 00:55:04.377945 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 
2026-03-03 00:55:04.377950 | orchestrator | Tuesday 03 March 2026 00:54:36 +0000 (0:00:01.661) 0:01:06.156 ********* 2026-03-03 00:55:04.377956 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-03 00:55:04.377962 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-03 00:55:04.377968 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-03 00:55:04.377974 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-03 00:55:04.377993 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-03 00:55:04.377999 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-03 00:55:04.378005 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-03 00:55:04.378011 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-03 00:55:04.378056 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-03 00:55:04.378063 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-03 00:55:04.378069 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-03 00:55:04.378075 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-03 00:55:04.378081 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-03 00:55:04.378087 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-03 00:55:04.378093 | orchestrator | 2026-03-03 00:55:04.378099 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-03 00:55:04.378106 | orchestrator | Tuesday 03 March 2026 00:54:41 +0000 (0:00:04.683) 0:01:10.839 ********* 2026-03-03 00:55:04.378113 | orchestrator | ok: [testbed-manager] 2026-03-03 00:55:04.378119 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:55:04.378126 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:55:04.378132 | orchestrator | ok: [testbed-node-2] 2026-03-03 
00:55:04.378143 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:55:04.378149 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:55:04.378155 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:55:04.378162 | orchestrator |
2026-03-03 00:55:04.378168 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-03 00:55:04.378174 | orchestrator | Tuesday 03 March 2026 00:54:42 +0000 (0:00:00.987) 0:01:11.826 *********
2026-03-03 00:55:04.378185 | orchestrator | changed: [testbed-manager]
2026-03-03 00:55:04.378191 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:55:04.378197 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:55:04.378203 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:55:04.378210 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:55:04.378216 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:55:04.378222 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:55:04.378229 | orchestrator |
2026-03-03 00:55:04.378235 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-03 00:55:04.378241 | orchestrator | Tuesday 03 March 2026 00:54:43 +0000 (0:00:01.732) 0:01:13.559 *********
2026-03-03 00:55:04.378248 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:55:04.378254 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:55:04.378260 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:55:04.378266 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:55:04.378273 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:55:04.378279 | orchestrator | ok: [testbed-manager]
2026-03-03 00:55:04.378285 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:55:04.378291 | orchestrator |
2026-03-03 00:55:04.378297 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-03 00:55:04.378303 | orchestrator | Tuesday 03 March 2026 00:54:45 +0000 (0:00:01.477) 0:01:15.037 *********
2026-03-03 00:55:04.378309 | orchestrator | ok: [testbed-manager]
2026-03-03 00:55:04.378315 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:55:04.378320 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:55:04.378326 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:55:04.378332 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:55:04.378337 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:55:04.378343 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:55:04.378348 | orchestrator |
2026-03-03 00:55:04.378354 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-03 00:55:04.378360 | orchestrator | Tuesday 03 March 2026 00:54:47 +0000 (0:00:01.938) 0:01:16.975 *********
2026-03-03 00:55:04.378366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-03 00:55:04.378374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 00:55:04.378381 | orchestrator |
2026-03-03 00:55:04.378386 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-03 00:55:04.378392 | orchestrator | Tuesday 03 March 2026 00:54:48 +0000 (0:00:01.287) 0:01:18.262 *********
2026-03-03 00:55:04.378397 | orchestrator | changed: [testbed-manager]
2026-03-03 00:55:04.378403 | orchestrator |
2026-03-03 00:55:04.378408 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-03 00:55:04.378414 | orchestrator | Tuesday 03 March 2026 00:54:50 +0000 (0:00:01.806) 0:01:20.069 *********
2026-03-03 00:55:04.378419 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:55:04.378425 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:55:04.378431 | orchestrator | changed:
[testbed-node-2]
2026-03-03 00:55:04.378436 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:55:04.378442 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:55:04.378447 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:55:04.378453 | orchestrator | changed: [testbed-manager]
2026-03-03 00:55:04.378459 | orchestrator |
2026-03-03 00:55:04.378465 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:55:04.378475 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:04.378482 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:04.378488 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:04.378509 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:04.378536 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:04.378542 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:04.378548 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:04.378554 | orchestrator |
2026-03-03 00:55:04.378561 | orchestrator |
2026-03-03 00:55:04.378566 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:55:04.378573 | orchestrator | Tuesday 03 March 2026 00:55:01 +0000 (0:00:11.169) 0:01:31.239 *********
2026-03-03 00:55:04.378579 | orchestrator | ===============================================================================
2026-03-03 00:55:04.378585 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 37.89s
2026-03-03 00:55:04.378592 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.16s
2026-03-03 00:55:04.378598 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.17s
2026-03-03 00:55:04.378603 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.68s
2026-03-03 00:55:04.378609 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.77s
2026-03-03 00:55:04.378614 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.36s
2026-03-03 00:55:04.378620 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.12s
2026-03-03 00:55:04.378630 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.94s
2026-03-03 00:55:04.378636 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.81s
2026-03-03 00:55:04.378642 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.75s
2026-03-03 00:55:04.378647 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.73s
2026-03-03 00:55:04.378653 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.66s
2026-03-03 00:55:04.378659 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.55s
2026-03-03 00:55:04.378664 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.48s
2026-03-03 00:55:04.378670 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.29s
2026-03-03 00:55:04.378676 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.99s
2026-03-03 00:55:04.378682 | orchestrator | 2026-03-03 00:55:04 | INFO  | Task 437aab7d-935d-4679-9db8-e9ce18f95b6c is in state SUCCESS
2026-03-03
00:55:04.378688 | orchestrator | 2026-03-03 00:55:04 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:07.416074 | orchestrator | 2026-03-03 00:55:07 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:07.416250 | orchestrator | 2026-03-03 00:55:07 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:07.416859 | orchestrator | 2026-03-03 00:55:07 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:07.416907 | orchestrator | 2026-03-03 00:55:07 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:10.461609 | orchestrator | 2026-03-03 00:55:10 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:10.461679 | orchestrator | 2026-03-03 00:55:10 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:10.461687 | orchestrator | 2026-03-03 00:55:10 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:10.461694 | orchestrator | 2026-03-03 00:55:10 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:13.502198 | orchestrator | 2026-03-03 00:55:13 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:13.503207 | orchestrator | 2026-03-03 00:55:13 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:13.504145 | orchestrator | 2026-03-03 00:55:13 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:13.504178 | orchestrator | 2026-03-03 00:55:13 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:16.547675 | orchestrator | 2026-03-03 00:55:16 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:16.549924 | orchestrator | 2026-03-03 00:55:16 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:16.550265 | orchestrator | 2026-03-03 00:55:16 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:16.550296 | orchestrator | 2026-03-03 00:55:16 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:19.594792 | orchestrator | 2026-03-03 00:55:19 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:19.596445 | orchestrator | 2026-03-03 00:55:19 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:19.597755 | orchestrator | 2026-03-03 00:55:19 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:19.597792 | orchestrator | 2026-03-03 00:55:19 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:22.635013 | orchestrator | 2026-03-03 00:55:22 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:22.635135 | orchestrator | 2026-03-03 00:55:22 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:22.637323 | orchestrator | 2026-03-03 00:55:22 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:22.637360 | orchestrator | 2026-03-03 00:55:22 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:25.680609 | orchestrator | 2026-03-03 00:55:25 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:25.683322 | orchestrator | 2026-03-03 00:55:25 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:25.685837 | orchestrator | 2026-03-03 00:55:25 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:25.685912 | orchestrator | 2026-03-03 00:55:25 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:28.722122 | orchestrator | 2026-03-03 00:55:28 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:28.723172 | orchestrator | 2026-03-03 00:55:28 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:28.724783 | orchestrator | 2026-03-03 00:55:28 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:28.724812 | orchestrator | 2026-03-03 00:55:28 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:31.770876 | orchestrator | 2026-03-03 00:55:31 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:31.772983 | orchestrator | 2026-03-03 00:55:31 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state STARTED
2026-03-03 00:55:31.775043 | orchestrator | 2026-03-03 00:55:31 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:31.775410 | orchestrator | 2026-03-03 00:55:31 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:34.808907 | orchestrator | 2026-03-03 00:55:34 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:34.810241 | orchestrator | 2026-03-03 00:55:34 | INFO  | Task 93e3182e-1de1-4343-8ae8-401dee2b4c35 is in state STARTED
2026-03-03 00:55:34.810664 | orchestrator | 2026-03-03 00:55:34 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:55:34.811267 | orchestrator | 2026-03-03 00:55:34 | INFO  | Task 71a0a104-bdd5-4ab4-b536-253bbc50fd6f is in state STARTED
2026-03-03 00:55:34.817716 | orchestrator | 2026-03-03 00:55:34 | INFO  | Task 6973b3bb-2137-4f5f-8b73-23f4f2d0b14c is in state SUCCESS
2026-03-03 00:55:34.819118 | orchestrator |
2026-03-03 00:55:34.819161 | orchestrator |
2026-03-03 00:55:34.819170 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-03 00:55:34.819178 | orchestrator |
2026-03-03 00:55:34.819186 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-03 00:55:34.819193 | orchestrator | Tuesday 03 March 2026 00:53:22 +0000 (0:00:00.220) 0:00:00.220 *********
2026-03-03 00:55:34.819201 |
orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 00:55:34.819209 | orchestrator |
2026-03-03 00:55:34.819217 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-03 00:55:34.819224 | orchestrator | Tuesday 03 March 2026 00:53:23 +0000 (0:00:01.170) 0:00:01.390 *********
2026-03-03 00:55:34.819231 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-03 00:55:34.819239 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-03 00:55:34.819246 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-03 00:55:34.819253 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-03 00:55:34.819260 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-03 00:55:34.819267 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-03 00:55:34.819275 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-03 00:55:34.820078 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-03 00:55:34.820113 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-03 00:55:34.820121 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-03 00:55:34.820129 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-03 00:55:34.820137 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-03 00:55:34.820145 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-03 00:55:34.820153 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-03 00:55:34.820160 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-03 00:55:34.820168 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-03 00:55:34.820188 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-03 00:55:34.820196 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-03 00:55:34.820203 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-03 00:55:34.820210 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-03 00:55:34.820218 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-03 00:55:34.820225 | orchestrator |
2026-03-03 00:55:34.820233 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-03 00:55:34.820240 | orchestrator | Tuesday 03 March 2026 00:53:27 +0000 (0:00:04.169) 0:00:05.560 *********
2026-03-03 00:55:34.820248 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 00:55:34.820256 | orchestrator |
2026-03-03 00:55:34.820264 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-03 00:55:34.820271 | orchestrator | Tuesday 03 March 2026 00:53:29 +0000 (0:00:01.295) 0:00:06.855 *********
2026-03-03 00:55:34.820282 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd',
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.820293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.820350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.820360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.820368 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-03 00:55:34.820409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.820416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.820612 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.820627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820682 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.820770 | orchestrator | 2026-03-03 00:55:34.820778 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-03 00:55:34.820785 | orchestrator | Tuesday 03 March 2026 00:53:35 +0000 (0:00:06.461) 0:00:13.317 ********* 2026-03-03 00:55:34.820793 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.820804 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.820812 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.820819 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:55:34.820827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.820860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.820869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.820881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.820889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.820897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.820905 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:55:34.820912 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:55:34.820923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.820931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.820938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.820946 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:55:34.820957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.820970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 
00:55:34.820977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.820985 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:55:34.820992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.821003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821018 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:55:34.821026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.821040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821061 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:55:34.821068 
| orchestrator | 2026-03-03 00:55:34.821076 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-03 00:55:34.821083 | orchestrator | Tuesday 03 March 2026 00:53:37 +0000 (0:00:01.707) 0:00:15.025 ********* 2026-03-03 00:55:34.821091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.821099 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821106 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821114 | orchestrator | skipping: [testbed-manager] 2026-03-03 
00:55:34.821124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.821132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.821140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821180 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:55:34.821187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.821198 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.821222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821245 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:55:34.821253 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:55:34.821260 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:55:34.821268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.821275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821291 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:55:34.821298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-03 00:55:34.821306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.821327 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:55:34.821335 | orchestrator | 2026-03-03 00:55:34.821342 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-03 00:55:34.821353 | orchestrator | Tuesday 03 March 2026 00:53:40 +0000 (0:00:03.072) 0:00:18.098 ********* 2026-03-03 00:55:34.821361 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:55:34.821368 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:55:34.821375 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:55:34.821382 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:55:34.821390 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:55:34.821409 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:55:34.821421 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:55:34.821482 | orchestrator | 2026-03-03 00:55:34.821494 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-03 00:55:34.821506 | orchestrator | Tuesday 03 March 2026 00:53:41 +0000 (0:00:01.378) 0:00:19.476 ********* 2026-03-03 00:55:34.821518 | orchestrator | skipping: [testbed-manager] 2026-03-03 00:55:34.821531 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:55:34.821542 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:55:34.821555 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:55:34.821567 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:55:34.821580 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:55:34.821593 | orchestrator | skipping: [testbed-node-5] 
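The loop items echoed above show kolla-ansible iterating one service map (fluentd, kolla-toolbox, cron) per host, with each entry carrying a container name, image, environment, and volume list. A minimal sketch of working with a map of that shape — the dict below is illustrative, copied in spirit from the log output, and the helper `enabled_images` is a hypothetical name, not part of kolla-ansible:

```python
# Sketch: a service map shaped like the loop items in the log above.
# Values are illustrative, trimmed from what the log echoes per host.
services = {
    "fluentd": {
        "container_name": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/fluentd:2024.2",
        "volumes": ["kolla_logs:/var/log/kolla/"],
    },
    "cron": {
        "container_name": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cron:2024.2",
        "volumes": ["kolla_logs:/var/log/kolla/"],
    },
}

def enabled_images(svc_map):
    """Return the image references of services flagged enabled,
    sorted for stable output (hypothetical helper)."""
    return sorted(v["image"] for v in svc_map.values() if v.get("enabled"))

print(enabled_images(services))
```

Each skipped/changed loop result in the log corresponds to one `(key, value)` pair of such a map, which is why the same three service dicts repeat once per target host.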
2026-03-03 00:55:34.821606 | orchestrator | 2026-03-03 00:55:34.821618 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-03 00:55:34.821629 | orchestrator | Tuesday 03 March 2026 00:53:43 +0000 (0:00:01.602) 0:00:21.078 ********* 2026-03-03 00:55:34.821638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.821646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.821654 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.821666 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.821680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.821703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.821711 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.821718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821737 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821829 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.821836 | orchestrator | 2026-03-03 00:55:34.821844 | orchestrator | TASK 
[common : Find custom fluentd input config files] ************************* 2026-03-03 00:55:34.821851 | orchestrator | Tuesday 03 March 2026 00:53:50 +0000 (0:00:06.867) 0:00:27.945 ********* 2026-03-03 00:55:34.821859 | orchestrator | [WARNING]: Skipped 2026-03-03 00:55:34.821867 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-03 00:55:34.821875 | orchestrator | to this access issue: 2026-03-03 00:55:34.821882 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-03 00:55:34.821889 | orchestrator | directory 2026-03-03 00:55:34.821897 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-03 00:55:34.821904 | orchestrator | 2026-03-03 00:55:34.821911 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-03 00:55:34.821919 | orchestrator | Tuesday 03 March 2026 00:53:51 +0000 (0:00:01.287) 0:00:29.233 ********* 2026-03-03 00:55:34.821926 | orchestrator | [WARNING]: Skipped 2026-03-03 00:55:34.821933 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-03 00:55:34.821944 | orchestrator | to this access issue: 2026-03-03 00:55:34.821952 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-03 00:55:34.821959 | orchestrator | directory 2026-03-03 00:55:34.821967 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-03 00:55:34.821974 | orchestrator | 2026-03-03 00:55:34.821981 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-03 00:55:34.821987 | orchestrator | Tuesday 03 March 2026 00:53:52 +0000 (0:00:00.810) 0:00:30.043 ********* 2026-03-03 00:55:34.821994 | orchestrator | [WARNING]: Skipped 2026-03-03 00:55:34.822001 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-03 00:55:34.822007 | 
orchestrator | to this access issue: 2026-03-03 00:55:34.822052 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-03 00:55:34.822061 | orchestrator | directory 2026-03-03 00:55:34.822068 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-03 00:55:34.822075 | orchestrator | 2026-03-03 00:55:34.822082 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-03 00:55:34.822088 | orchestrator | Tuesday 03 March 2026 00:53:53 +0000 (0:00:00.746) 0:00:30.790 ********* 2026-03-03 00:55:34.822095 | orchestrator | [WARNING]: Skipped 2026-03-03 00:55:34.822102 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-03 00:55:34.822109 | orchestrator | to this access issue: 2026-03-03 00:55:34.822115 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-03 00:55:34.822126 | orchestrator | directory 2026-03-03 00:55:34.822133 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-03 00:55:34.822140 | orchestrator | 2026-03-03 00:55:34.822147 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-03 00:55:34.822153 | orchestrator | Tuesday 03 March 2026 00:53:54 +0000 (0:00:00.896) 0:00:31.686 ********* 2026-03-03 00:55:34.822160 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:55:34.822167 | orchestrator | changed: [testbed-manager] 2026-03-03 00:55:34.822174 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:55:34.822180 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:55:34.822187 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:55:34.822194 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:55:34.822200 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:55:34.822207 | orchestrator | 2026-03-03 00:55:34.822214 | orchestrator | TASK [common : Copying over cron logrotate 
config file] ************************ 2026-03-03 00:55:34.822221 | orchestrator | Tuesday 03 March 2026 00:53:58 +0000 (0:00:04.238) 0:00:35.924 ********* 2026-03-03 00:55:34.822227 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-03 00:55:34.822234 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-03 00:55:34.822241 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-03 00:55:34.822248 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-03 00:55:34.822255 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-03 00:55:34.822261 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-03 00:55:34.822268 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-03 00:55:34.822275 | orchestrator | 2026-03-03 00:55:34.822285 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-03 00:55:34.822292 | orchestrator | Tuesday 03 March 2026 00:54:03 +0000 (0:00:04.972) 0:00:40.897 ********* 2026-03-03 00:55:34.822299 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:55:34.822305 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:55:34.822312 | orchestrator | changed: [testbed-manager] 2026-03-03 00:55:34.822319 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:55:34.822326 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:55:34.822332 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:55:34.822339 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:55:34.822346 | orchestrator | 2026-03-03 00:55:34.822352 | 
orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-03 00:55:34.822359 | orchestrator | Tuesday 03 March 2026 00:54:08 +0000 (0:00:05.297) 0:00:46.195 ********* 2026-03-03 00:55:34.822366 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.822390 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822397 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.822404 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.822421 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.822465 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822495 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.822520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.822532 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822548 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 00:55:34.822558 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822566 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822573 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822591 | 
orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822599 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822606 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822613 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822620 | orchestrator | 2026-03-03 00:55:34.822626 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-03 00:55:34.822633 | orchestrator | Tuesday 03 March 
2026 00:54:10 +0000 (0:00:01.797) 0:00:47.993 ********* 2026-03-03 00:55:34.822640 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-03 00:55:34.822647 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-03 00:55:34.822654 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-03 00:55:34.822660 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-03 00:55:34.822667 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-03 00:55:34.822673 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-03 00:55:34.822680 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-03 00:55:34.822687 | orchestrator | 2026-03-03 00:55:34.822694 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-03 00:55:34.822706 | orchestrator | Tuesday 03 March 2026 00:54:13 +0000 (0:00:02.768) 0:00:50.761 ********* 2026-03-03 00:55:34.822713 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-03 00:55:34.822719 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-03 00:55:34.822726 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-03 00:55:34.822733 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-03 00:55:34.822740 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-03 00:55:34.822750 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-03 00:55:34.822757 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-03 00:55:34.822764 | orchestrator | 2026-03-03 00:55:34.822771 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-03 00:55:34.822777 | orchestrator | Tuesday 03 March 2026 00:54:16 +0000 (0:00:03.073) 0:00:53.835 ********* 2026-03-03 00:55:34.822784 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822803 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822825 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822872 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-03 00:55:34.822879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822886 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822932 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:55:34.822976 | orchestrator | 2026-03-03 00:55:34.822989 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-03 00:55:34.823000 | orchestrator | Tuesday 03 March 2026 00:54:20 +0000 (0:00:04.300) 0:00:58.136 ********* 2026-03-03 00:55:34.823018 | orchestrator | changed: [testbed-manager] 2026-03-03 00:55:34.823028 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:55:34.823039 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:55:34.823050 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:55:34.823060 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:55:34.823069 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:55:34.823080 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:55:34.823091 | orchestrator | 2026-03-03 00:55:34.823103 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-03 00:55:34.823119 | orchestrator | Tuesday 03 March 2026 00:54:22 +0000 (0:00:01.728) 0:00:59.865 ********* 2026-03-03 00:55:34.823131 | orchestrator | changed: [testbed-manager] 2026-03-03 00:55:34.823142 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:55:34.823154 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:55:34.823163 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:55:34.823170 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:55:34.823177 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:55:34.823183 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:55:34.823190 | orchestrator | 2026-03-03 00:55:34.823197 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-03 00:55:34.823203 | orchestrator | Tuesday 03 March 2026 00:54:23 +0000 (0:00:01.049) 0:01:00.914 ********* 2026-03-03 00:55:34.823210 | orchestrator | 2026-03-03 00:55:34.823217 | 
orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-03 00:55:34.823224 | orchestrator | Tuesday 03 March 2026 00:54:23 +0000 (0:00:00.062) 0:01:00.976 ********* 2026-03-03 00:55:34.823231 | orchestrator | 2026-03-03 00:55:34.823237 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-03 00:55:34.823244 | orchestrator | Tuesday 03 March 2026 00:54:23 +0000 (0:00:00.059) 0:01:01.035 ********* 2026-03-03 00:55:34.823251 | orchestrator | 2026-03-03 00:55:34.823257 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-03 00:55:34.823264 | orchestrator | Tuesday 03 March 2026 00:54:23 +0000 (0:00:00.178) 0:01:01.214 ********* 2026-03-03 00:55:34.823271 | orchestrator | 2026-03-03 00:55:34.823277 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-03 00:55:34.823284 | orchestrator | Tuesday 03 March 2026 00:54:23 +0000 (0:00:00.059) 0:01:01.274 ********* 2026-03-03 00:55:34.823291 | orchestrator | 2026-03-03 00:55:34.823297 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-03 00:55:34.823304 | orchestrator | Tuesday 03 March 2026 00:54:23 +0000 (0:00:00.055) 0:01:01.330 ********* 2026-03-03 00:55:34.823310 | orchestrator | 2026-03-03 00:55:34.823317 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-03 00:55:34.823324 | orchestrator | Tuesday 03 March 2026 00:54:23 +0000 (0:00:00.060) 0:01:01.391 ********* 2026-03-03 00:55:34.823330 | orchestrator | 2026-03-03 00:55:34.823337 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-03 00:55:34.823350 | orchestrator | Tuesday 03 March 2026 00:54:23 +0000 (0:00:00.080) 0:01:01.471 ********* 2026-03-03 00:55:34.823357 | orchestrator | changed: [testbed-node-0] 
2026-03-03 00:55:34.823363 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:55:34.823370 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:55:34.823377 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:55:34.823383 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:55:34.823390 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:55:34.823397 | orchestrator | changed: [testbed-manager]
2026-03-03 00:55:34.823404 | orchestrator |
2026-03-03 00:55:34.823415 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-03 00:55:34.823446 | orchestrator | Tuesday 03 March 2026 00:54:50 +0000 (0:00:26.467) 0:01:27.938 *********
2026-03-03 00:55:34.823460 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:55:34.823471 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:55:34.823481 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:55:34.823500 | orchestrator | changed: [testbed-manager]
2026-03-03 00:55:34.823509 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:55:34.823519 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:55:34.823530 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:55:34.823540 | orchestrator |
2026-03-03 00:55:34.823552 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-03 00:55:34.823565 | orchestrator | Tuesday 03 March 2026 00:55:22 +0000 (0:00:31.949) 0:01:59.888 *********
2026-03-03 00:55:34.823576 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:55:34.823588 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:55:34.823595 | orchestrator | ok: [testbed-manager]
2026-03-03 00:55:34.823602 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:55:34.823609 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:55:34.823615 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:55:34.823622 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:55:34.823629 | orchestrator |
2026-03-03 00:55:34.823635 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-03 00:55:34.823642 | orchestrator | Tuesday 03 March 2026 00:55:24 +0000 (0:00:02.022) 0:02:01.910 *********
2026-03-03 00:55:34.823649 | orchestrator | changed: [testbed-manager]
2026-03-03 00:55:34.823655 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:55:34.823662 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:55:34.823669 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:55:34.823675 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:55:34.823682 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:55:34.823688 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:55:34.823695 | orchestrator |
2026-03-03 00:55:34.823702 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:55:34.823709 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-03 00:55:34.823716 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-03 00:55:34.823727 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-03 00:55:34.823738 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-03 00:55:34.823745 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-03 00:55:34.823757 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-03 00:55:34.823763 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-03 00:55:34.823770 | orchestrator |
2026-03-03 00:55:34.823777 | orchestrator |
2026-03-03 00:55:34.823784 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:55:34.823791 | orchestrator | Tuesday 03 March 2026 00:55:33 +0000 (0:00:09.007) 0:02:10.918 *********
2026-03-03 00:55:34.823797 | orchestrator | ===============================================================================
2026-03-03 00:55:34.823804 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.95s
2026-03-03 00:55:34.823811 | orchestrator | common : Restart fluentd container ------------------------------------- 26.47s
2026-03-03 00:55:34.823817 | orchestrator | common : Restart cron container ----------------------------------------- 9.01s
2026-03-03 00:55:34.823824 | orchestrator | common : Copying over config.json files for services -------------------- 6.87s
2026-03-03 00:55:34.823831 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.46s
2026-03-03 00:55:34.823837 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 5.30s
2026-03-03 00:55:34.823849 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.97s
2026-03-03 00:55:34.823856 | orchestrator | common : Check common containers ---------------------------------------- 4.30s
2026-03-03 00:55:34.823862 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.24s
2026-03-03 00:55:34.823869 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.17s
2026-03-03 00:55:34.823876 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.07s
2026-03-03 00:55:34.823882 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.07s
2026-03-03 00:55:34.823889 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.77s
2026-03-03 00:55:34.823896 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.02s
2026-03-03 00:55:34.823908 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.80s
2026-03-03 00:55:34.823915 | orchestrator | common : Creating log volume -------------------------------------------- 1.73s
2026-03-03 00:55:34.823922 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.71s
2026-03-03 00:55:34.823929 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.60s
2026-03-03 00:55:34.823936 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.38s
2026-03-03 00:55:34.823943 | orchestrator | common : include_tasks -------------------------------------------------- 1.30s
2026-03-03 00:55:34.823949 | orchestrator | 2026-03-03 00:55:34 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:34.823957 | orchestrator | 2026-03-03 00:55:34 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED
2026-03-03 00:55:34.823963 | orchestrator | 2026-03-03 00:55:34 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:37.845338 | orchestrator | 2026-03-03 00:55:37 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:37.845593 | orchestrator | 2026-03-03 00:55:37 | INFO  | Task 93e3182e-1de1-4343-8ae8-401dee2b4c35 is in state STARTED
2026-03-03 00:55:37.846267 | orchestrator | 2026-03-03 00:55:37 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:55:37.847119 | orchestrator | 2026-03-03 00:55:37 | INFO  | Task 71a0a104-bdd5-4ab4-b536-253bbc50fd6f is in state STARTED
2026-03-03 00:55:37.847795 | orchestrator | 2026-03-03 00:55:37 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:37.848595 | orchestrator | 2026-03-03 00:55:37 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED
2026-03-03 00:55:37.848631 | orchestrator | 2026-03-03 00:55:37 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:40.874197 | orchestrator | 2026-03-03 00:55:40 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:40.874274 | orchestrator | 2026-03-03 00:55:40 | INFO  | Task 93e3182e-1de1-4343-8ae8-401dee2b4c35 is in state STARTED
2026-03-03 00:55:40.874995 | orchestrator | 2026-03-03 00:55:40 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:55:40.875665 | orchestrator | 2026-03-03 00:55:40 | INFO  | Task 71a0a104-bdd5-4ab4-b536-253bbc50fd6f is in state STARTED
2026-03-03 00:55:40.876161 | orchestrator | 2026-03-03 00:55:40 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:40.876769 | orchestrator | 2026-03-03 00:55:40 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED
2026-03-03 00:55:40.876801 | orchestrator | 2026-03-03 00:55:40 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:43.901876 | orchestrator | 2026-03-03 00:55:43 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:43.903591 | orchestrator | 2026-03-03 00:55:43 | INFO  | Task 93e3182e-1de1-4343-8ae8-401dee2b4c35 is in state STARTED
2026-03-03 00:55:43.903881 | orchestrator | 2026-03-03 00:55:43 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:55:43.904607 | orchestrator | 2026-03-03 00:55:43 | INFO  | Task 71a0a104-bdd5-4ab4-b536-253bbc50fd6f is in state STARTED
2026-03-03 00:55:43.905128 | orchestrator | 2026-03-03 00:55:43 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:43.906756 | orchestrator | 2026-03-03 00:55:43 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED
2026-03-03 00:55:43.906801 | orchestrator | 2026-03-03 00:55:43 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:46.943432 | orchestrator | 2026-03-03 00:55:46 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:46.945524 | orchestrator | 2026-03-03 00:55:46 | INFO  | Task 93e3182e-1de1-4343-8ae8-401dee2b4c35 is in state STARTED
2026-03-03 00:55:46.947777 | orchestrator | 2026-03-03 00:55:46 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:55:46.950003 | orchestrator | 2026-03-03 00:55:46 | INFO  | Task 71a0a104-bdd5-4ab4-b536-253bbc50fd6f is in state STARTED
2026-03-03 00:55:46.951955 | orchestrator | 2026-03-03 00:55:46 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:46.953666 | orchestrator | 2026-03-03 00:55:46 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED
2026-03-03 00:55:46.953745 | orchestrator | 2026-03-03 00:55:46 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:50.005739 | orchestrator | 2026-03-03 00:55:50 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:50.007165 | orchestrator | 2026-03-03 00:55:50 | INFO  | Task 93e3182e-1de1-4343-8ae8-401dee2b4c35 is in state STARTED
2026-03-03 00:55:50.007895 | orchestrator | 2026-03-03 00:55:50 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:55:50.008876 | orchestrator | 2026-03-03 00:55:50 | INFO  | Task 71a0a104-bdd5-4ab4-b536-253bbc50fd6f is in state STARTED
2026-03-03 00:55:50.009830 | orchestrator | 2026-03-03 00:55:50 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:50.010748 | orchestrator | 2026-03-03 00:55:50 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED
2026-03-03 00:55:50.010780 | orchestrator | 2026-03-03 00:55:50 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:53.110714 | orchestrator | 2026-03-03 00:55:53 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:53.113429 | orchestrator | 2026-03-03 00:55:53 | INFO  | Task 93e3182e-1de1-4343-8ae8-401dee2b4c35 is in state STARTED
2026-03-03 00:55:53.114253 | orchestrator | 2026-03-03 00:55:53 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:55:53.115191 | orchestrator | 2026-03-03 00:55:53 | INFO  | Task 71a0a104-bdd5-4ab4-b536-253bbc50fd6f is in state STARTED
2026-03-03 00:55:53.116039 | orchestrator | 2026-03-03 00:55:53 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:53.116765 | orchestrator | 2026-03-03 00:55:53 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED
2026-03-03 00:55:53.116801 | orchestrator | 2026-03-03 00:55:53 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:56.177266 | orchestrator | 2026-03-03 00:55:56 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:56.177404 | orchestrator | 2026-03-03 00:55:56 | INFO  | Task 93e3182e-1de1-4343-8ae8-401dee2b4c35 is in state STARTED
2026-03-03 00:55:56.177416 | orchestrator | 2026-03-03 00:55:56 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:55:56.177423 | orchestrator | 2026-03-03 00:55:56 | INFO  | Task 71a0a104-bdd5-4ab4-b536-253bbc50fd6f is in state SUCCESS
2026-03-03 00:55:56.177650 | orchestrator | 2026-03-03 00:55:56 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:55:56.178648 | orchestrator | 2026-03-03 00:55:56 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED
2026-03-03 00:55:56.179863 | orchestrator | 2026-03-03 00:55:56 | INFO  | Task 3b6b72bf-1096-4d98-b5d9-c536378ba656 is in state STARTED
2026-03-03 00:55:56.179918 | orchestrator | 2026-03-03 00:55:56 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:55:59.210199 | orchestrator | 2026-03-03 00:55:59 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:55:59.210474 | orchestrator | 2026-03-03 00:55:59 | INFO  | Task 93e3182e-1de1-4343-8ae8-401dee2b4c35 is in state SUCCESS
2026-03-03 00:55:59.212210 | orchestrator |
2026-03-03 00:55:59.212254 | orchestrator |
2026-03-03 00:55:59.212262 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 00:55:59.212273 | orchestrator |
2026-03-03 00:55:59.212281 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 00:55:59.212287 | orchestrator | Tuesday 03 March 2026 00:55:39 +0000 (0:00:00.230) 0:00:00.230 *********
2026-03-03 00:55:59.212294 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:55:59.212301 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:55:59.212307 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:55:59.212315 | orchestrator |
2026-03-03 00:55:59.212325 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 00:55:59.212334 | orchestrator | Tuesday 03 March 2026 00:55:40 +0000 (0:00:00.417) 0:00:00.648 *********
2026-03-03 00:55:59.212342 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-03 00:55:59.212349 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-03 00:55:59.212355 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-03 00:55:59.212361 | orchestrator |
2026-03-03 00:55:59.212412 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-03 00:55:59.212418 | orchestrator |
2026-03-03 00:55:59.212425 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-03 00:55:59.212507 | orchestrator | Tuesday 03 March 2026 00:55:40 +0000 (0:00:00.459) 0:00:01.107 *********
2026-03-03 00:55:59.212517 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:55:59.212527 | orchestrator |
2026-03-03 00:55:59.212536 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-03 00:55:59.212543 | orchestrator | Tuesday 03 March 2026 00:55:41 +0000 (0:00:00.501) 0:00:01.608 *********
2026-03-03 00:55:59.212550 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-03 00:55:59.212557 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-03 00:55:59.212564 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-03 00:55:59.212572 | orchestrator |
2026-03-03 00:55:59.212578 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-03 00:55:59.212586 | orchestrator | Tuesday 03 March 2026 00:55:42 +0000 (0:00:01.015) 0:00:02.624 *********
2026-03-03 00:55:59.212593 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-03 00:55:59.212601 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-03 00:55:59.212606 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-03 00:55:59.212627 | orchestrator |
2026-03-03 00:55:59.212631 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-03 00:55:59.212635 | orchestrator | Tuesday 03 March 2026 00:55:44 +0000 (0:00:01.963) 0:00:04.587 *********
2026-03-03 00:55:59.212640 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:55:59.212644 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:55:59.212648 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:55:59.212651 | orchestrator |
2026-03-03 00:55:59.212655 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-03 00:55:59.212659 | orchestrator | Tuesday 03 March 2026 00:55:46 +0000 (0:00:01.851) 0:00:06.439 *********
2026-03-03 00:55:59.212663 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:55:59.212667 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:55:59.212671 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:55:59.212675 | orchestrator |
2026-03-03 00:55:59.212679 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:55:59.212683 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:59.212689 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:59.212693 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:55:59.212697 | orchestrator |
2026-03-03 00:55:59.212700 | orchestrator |
2026-03-03 00:55:59.212704 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:55:59.212708 | orchestrator | Tuesday 03 March 2026 00:55:53 +0000 (0:00:07.028) 0:00:13.467 *********
2026-03-03 00:55:59.212712 | orchestrator | ===============================================================================
2026-03-03 00:55:59.212716 | orchestrator | memcached : Restart memcached container --------------------------------- 7.03s
2026-03-03 00:55:59.212720 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.96s
2026-03-03 00:55:59.212724 | orchestrator | memcached : Check memcached container ----------------------------------- 1.85s
2026-03-03 00:55:59.212727 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.02s
2026-03-03 00:55:59.212731 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s
2026-03-03 00:55:59.212735 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s
2026-03-03 00:55:59.212739 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s
2026-03-03 00:55:59.212742 | orchestrator |
2026-03-03 00:55:59.212746 | orchestrator |
2026-03-03 00:55:59.212750 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 00:55:59.212754 | orchestrator |
2026-03-03 00:55:59.212757 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 00:55:59.212767 | orchestrator | Tuesday 03 March 2026 00:55:39 +0000 (0:00:00.413) 0:00:00.413 *********
2026-03-03 00:55:59.212771 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:55:59.212775 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:55:59.212779 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:55:59.212783 | orchestrator |
2026-03-03 00:55:59.212787 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 00:55:59.212802 | orchestrator | Tuesday 03 March 2026 00:55:40 +0000 (0:00:00.403) 0:00:00.817 *********
2026-03-03 00:55:59.212806 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-03 00:55:59.212810 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-03 00:55:59.212814 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-03 00:55:59.212818 | orchestrator |
2026-03-03 00:55:59.212821 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-03 00:55:59.212825 | orchestrator |
2026-03-03 00:55:59.212829 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-03 00:55:59.212837 | orchestrator | Tuesday 03 March 2026 00:55:40 +0000 (0:00:00.673) 0:00:01.490 *********
2026-03-03 00:55:59.212841 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:55:59.212845 | orchestrator |
2026-03-03 00:55:59.212849 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-03 00:55:59.212852 | orchestrator | Tuesday 03 March 2026 00:55:41 +0000 (0:00:00.722) 0:00:02.213 *********
2026-03-03 00:55:59.212858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212897 | orchestrator |
2026-03-03 00:55:59.212901 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-03 00:55:59.212905 | orchestrator | Tuesday 03 March 2026 00:55:43 +0000 (0:00:01.569) 0:00:03.782 *********
2026-03-03 00:55:59.212909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212943 | orchestrator |
2026-03-03 00:55:59.212947 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-03 00:55:59.212951 | orchestrator | Tuesday 03 March 2026 00:55:45 +0000 (0:00:02.799) 0:00:06.582 *********
2026-03-03 00:55:59.212955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.212967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.213039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.213048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.213059 | orchestrator |
2026-03-03 00:55:59.213069 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-03-03 00:55:59.213076 | orchestrator | Tuesday 03 March 2026 00:55:48 +0000 (0:00:02.579) 0:00:09.162 *********
2026-03-03 00:55:59.213092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.213096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.213100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-03 00:55:59.213105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-03 00:55:59.213113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-03 00:55:59.213118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-03 00:55:59.213127 | orchestrator | 2026-03-03 00:55:59.213133 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-03 00:55:59.213137 | orchestrator | Tuesday 03 March 2026 00:55:49 +0000 (0:00:01.600) 0:00:10.762 ********* 2026-03-03 00:55:59.213141 | orchestrator | 2026-03-03 00:55:59.213145 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-03 00:55:59.213151 | orchestrator | Tuesday 03 March 2026 00:55:50 +0000 (0:00:00.084) 0:00:10.847 ********* 2026-03-03 00:55:59.213155 | orchestrator | 2026-03-03 00:55:59.213159 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-03 00:55:59.213163 | orchestrator | Tuesday 03 March 2026 00:55:50 +0000 (0:00:00.062) 0:00:10.910 ********* 2026-03-03 00:55:59.213167 | orchestrator | 2026-03-03 00:55:59.213170 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-03 00:55:59.213174 | orchestrator | Tuesday 03 March 2026 00:55:50 +0000 (0:00:00.067) 0:00:10.978 ********* 2026-03-03 
00:55:59.213178 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:55:59.213182 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:55:59.213186 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:55:59.213189 | orchestrator | 2026-03-03 00:55:59.213193 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-03 00:55:59.213197 | orchestrator | Tuesday 03 March 2026 00:55:54 +0000 (0:00:03.841) 0:00:14.819 ********* 2026-03-03 00:55:59.213201 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:55:59.213204 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:55:59.213208 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:55:59.213212 | orchestrator | 2026-03-03 00:55:59.213216 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:55:59.213220 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:55:59.213224 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:55:59.213227 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:55:59.213231 | orchestrator | 2026-03-03 00:55:59.213235 | orchestrator | 2026-03-03 00:55:59.213239 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:55:59.213243 | orchestrator | Tuesday 03 March 2026 00:55:57 +0000 (0:00:03.444) 0:00:18.264 ********* 2026-03-03 00:55:59.213246 | orchestrator | =============================================================================== 2026-03-03 00:55:59.213250 | orchestrator | redis : Restart redis container ----------------------------------------- 3.84s 2026-03-03 00:55:59.213254 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.45s 2026-03-03 00:55:59.213258 | 
orchestrator | redis : Copying over default config.json files -------------------------- 2.80s 2026-03-03 00:55:59.213261 | orchestrator | redis : Copying over redis config files --------------------------------- 2.58s 2026-03-03 00:55:59.213265 | orchestrator | redis : Check redis containers ------------------------------------------ 1.60s 2026-03-03 00:55:59.213269 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.57s 2026-03-03 00:55:59.213273 | orchestrator | redis : include_tasks --------------------------------------------------- 0.72s 2026-03-03 00:55:59.213276 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2026-03-03 00:55:59.213284 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2026-03-03 00:55:59.213288 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2026-03-03 00:55:59.213292 | orchestrator | 2026-03-03 00:55:59 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED 2026-03-03 00:55:59.213296 | orchestrator | 2026-03-03 00:55:59 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED 2026-03-03 00:55:59.213300 | orchestrator | 2026-03-03 00:55:59 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED 2026-03-03 00:55:59.213305 | orchestrator | 2026-03-03 00:55:59 | INFO  | Task 3b6b72bf-1096-4d98-b5d9-c536378ba656 is in state STARTED 2026-03-03 00:55:59.213310 | orchestrator | 2026-03-03 00:55:59 | INFO  | Wait 1 second(s) until the next check 2026-03-03 00:56:02.240760 | orchestrator | 2026-03-03 00:56:02 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 00:56:02.241151 | orchestrator | 2026-03-03 00:56:02 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED 2026-03-03 00:56:02.242111 | orchestrator | 2026-03-03 00:56:02 | INFO  | Task 
66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED 2026-03-03 00:56:02.244104 | orchestrator | 2026-03-03 00:56:02 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state STARTED 2026-03-03 00:56:02.246197 | orchestrator | 2026-03-03 00:56:02 | INFO  | Task 3b6b72bf-1096-4d98-b5d9-c536378ba656 is in state STARTED 2026-03-03 00:56:02.246234 | orchestrator | 2026-03-03 00:56:02 | INFO  | Wait 1 
second(s) until the next check 2026-03-03 00:56:41.911032 | orchestrator | 2026-03-03 00:56:41 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 00:56:41.911151 | orchestrator | 2026-03-03 00:56:41 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED 2026-03-03 00:56:41.911736 | orchestrator | 2026-03-03 00:56:41 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED 2026-03-03 00:56:41.915864 | orchestrator | 2026-03-03 00:56:41 | INFO  | Task 3d0bc0ce-b236-4b1a-81e5-c6b1282795a0 is in state SUCCESS 2026-03-03 00:56:41.917544 | orchestrator | 2026-03-03 00:56:41.917591 | orchestrator | 2026-03-03 00:56:41.917600 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 00:56:41.917607 | orchestrator | 2026-03-03 00:56:41.917614 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 00:56:41.917621 | orchestrator | Tuesday 03 March 2026 00:55:40 +0000 (0:00:00.412) 0:00:00.412 ********* 2026-03-03 00:56:41.917628 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:56:41.917635 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:56:41.917641 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:56:41.917648 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:56:41.917655 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:56:41.917661 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:56:41.917668 | orchestrator | 2026-03-03 00:56:41.917674 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 00:56:41.917680 | orchestrator | Tuesday 03 March 2026 00:55:40 +0000 (0:00:00.933) 0:00:01.345 ********* 2026-03-03 00:56:41.917687 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-03 00:56:41.917694 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 
2026-03-03 00:56:41.917701 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-03 00:56:41.917707 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-03 00:56:41.917714 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-03 00:56:41.917720 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-03 00:56:41.917726 | orchestrator | 2026-03-03 00:56:41.917732 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-03 00:56:41.917739 | orchestrator | 2026-03-03 00:56:41.917746 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-03 00:56:41.917753 | orchestrator | Tuesday 03 March 2026 00:55:41 +0000 (0:00:00.710) 0:00:02.056 ********* 2026-03-03 00:56:41.917762 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 00:56:41.917771 | orchestrator | 2026-03-03 00:56:41.917777 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-03 00:56:41.917783 | orchestrator | Tuesday 03 March 2026 00:55:42 +0000 (0:00:01.130) 0:00:03.187 ********* 2026-03-03 00:56:41.917790 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-03 00:56:41.917797 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-03 00:56:41.917804 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-03 00:56:41.917810 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-03 00:56:41.917817 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-03 00:56:41.917823 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-03 
00:56:41.917830 | orchestrator | 2026-03-03 00:56:41.917836 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-03 00:56:41.917842 | orchestrator | Tuesday 03 March 2026 00:55:43 +0000 (0:00:01.148) 0:00:04.335 ********* 2026-03-03 00:56:41.917848 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-03 00:56:41.917855 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-03 00:56:41.917861 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-03 00:56:41.917886 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-03 00:56:41.917893 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-03 00:56:41.917899 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-03 00:56:41.917904 | orchestrator | 2026-03-03 00:56:41.917910 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-03 00:56:41.917917 | orchestrator | Tuesday 03 March 2026 00:55:45 +0000 (0:00:01.658) 0:00:05.993 ********* 2026-03-03 00:56:41.917923 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-03 00:56:41.917929 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:56:41.917936 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-03 00:56:41.917945 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:56:41.917951 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-03 00:56:41.917957 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:56:41.917963 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-03 00:56:41.917981 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:56:41.917988 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-03 00:56:41.917994 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:56:41.918000 | orchestrator | skipping: 
[testbed-node-5] => (item=openvswitch)  2026-03-03 00:56:41.918007 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:56:41.918068 | orchestrator | 2026-03-03 00:56:41.918080 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-03 00:56:41.918087 | orchestrator | Tuesday 03 March 2026 00:55:46 +0000 (0:00:01.379) 0:00:07.373 ********* 2026-03-03 00:56:41.918094 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:56:41.918100 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:56:41.918107 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:56:41.918114 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:56:41.918121 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:56:41.918136 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:56:41.918142 | orchestrator | 2026-03-03 00:56:41.918150 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-03 00:56:41.918157 | orchestrator | Tuesday 03 March 2026 00:55:47 +0000 (0:00:00.655) 0:00:08.028 ********* 2026-03-03 00:56:41.918181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-03 00:56:41.918190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-03 00:56:41.918197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-03 00:56:41.918211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918334 | orchestrator |
2026-03-03 00:56:41.918340 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-03 00:56:41.918347 | orchestrator | Tuesday 03 March 2026 00:55:48 +0000 (0:00:01.252) 0:00:09.280 *********
2026-03-03 00:56:41.918354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image':
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918461 | orchestrator |
2026-03-03 00:56:41.918468 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-03 00:56:41.918474 | orchestrator | Tuesday 03 March 2026 00:55:51 +0000 (0:00:02.900) 0:00:12.181 *********
2026-03-03 00:56:41.918480 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:56:41.918492 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:56:41.918498 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:56:41.918505 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:56:41.918511 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:56:41.918517 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:56:41.918523 | orchestrator |
2026-03-03 00:56:41.918530 | orchestrator | TASK
[openvswitch : Check openvswitch containers] ******************************
2026-03-03 00:56:41.918536 | orchestrator | Tuesday 03 March 2026 00:55:53 +0000 (0:00:01.383) 0:00:13.564 *********
2026-03-03 00:56:41.918543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918612 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-03 00:56:41.918629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-03 00:56:41.918648 | orchestrator |
2026-03-03 00:56:41.918652 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-03 00:56:41.918655 | orchestrator | Tuesday 03 March 2026 00:55:56 +0000 (0:00:03.198) 0:00:16.762 *********
2026-03-03 00:56:41.918659 | orchestrator |
2026-03-03 00:56:41.918663 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-03 00:56:41.918667 | orchestrator | Tuesday 03 March 2026 00:55:56 +0000 (0:00:00.286) 0:00:17.049 *********
2026-03-03 00:56:41.918671 | orchestrator |
2026-03-03 00:56:41.918674 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-03 00:56:41.918678 | orchestrator | Tuesday 03 March 2026 00:55:56 +0000 (0:00:00.191) 0:00:17.240 *********
2026-03-03 00:56:41.918682 | orchestrator |
2026-03-03 00:56:41.918686 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-03 00:56:41.918690 | orchestrator | Tuesday 03 March 2026 00:55:57 +0000 (0:00:00.160) 0:00:17.401 *********
2026-03-03 00:56:41.918694 | orchestrator |
2026-03-03 00:56:41.918697 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-03 00:56:41.918701 | orchestrator | Tuesday 03 March 2026 00:55:57 +0000 (0:00:00.176) 0:00:17.578 *********
2026-03-03 00:56:41.918705 | orchestrator |
2026-03-03 00:56:41.918709 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-03 00:56:41.918712 | orchestrator | Tuesday 03 March 2026 00:55:57 +0000 (0:00:00.125) 0:00:17.704 *********
2026-03-03 00:56:41.918716 | orchestrator |
2026-03-03 00:56:41.918720 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-03 00:56:41.918724 | orchestrator | Tuesday 03 March 2026 00:55:57 +0000 (0:00:00.170) 0:00:17.875 *********
2026-03-03 00:56:41.918728 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:56:41.918732 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:56:41.918736 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:56:41.918739 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:56:41.918743 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:56:41.918747 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:56:41.918751 | orchestrator |
2026-03-03 00:56:41.918755 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-03 00:56:41.918759 | orchestrator | Tuesday 03 March 2026 00:56:05 +0000 (0:00:07.782) 0:00:25.657 *********
2026-03-03 00:56:41.918763 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:56:41.918767 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:56:41.918771 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:56:41.918775 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:56:41.918779 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:56:41.918782 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:56:41.918786 | orchestrator |
2026-03-03 00:56:41.918790 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-03 00:56:41.918798 | orchestrator | Tuesday 03 March 2026 00:56:06 +0000 (0:00:01.481) 0:00:27.138 *********
2026-03-03 00:56:41.918802 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:56:41.918808 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:56:41.918815 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:56:41.918821 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:56:41.918827 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:56:41.918833 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:56:41.918839 | orchestrator |
2026-03-03 00:56:41.918845 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-03 00:56:41.918852 | orchestrator | Tuesday 03 March 2026 00:56:17 +0000 (0:00:10.280) 0:00:37.419 *********
2026-03-03 00:56:41.918859 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-03 00:56:41.918869 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-03 00:56:41.918875 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-03 00:56:41.918882 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-03 00:56:41.918889 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-03 00:56:41.918899 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-03 00:56:41.918905 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-03 00:56:41.918911 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-03 00:56:41.918917 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-03 00:56:41.918924 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-03 00:56:41.918930 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-03 00:56:41.918938 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-03 00:56:41.918943 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-03 00:56:41.918947 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-03 00:56:41.918952 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-03 00:56:41.918956 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-03 00:56:41.918960 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-03 00:56:41.918965 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-03 00:56:41.918969 | orchestrator |
2026-03-03 00:56:41.918973 |
orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-03 00:56:41.918978 | orchestrator | Tuesday 03 March 2026 00:56:25 +0000 (0:00:08.510) 0:00:45.930 *********
2026-03-03 00:56:41.918982 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-03 00:56:41.918988 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:56:41.918995 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-03 00:56:41.919001 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:56:41.919008 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-03 00:56:41.919020 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:56:41.919026 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-03 00:56:41.919033 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-03 00:56:41.919039 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-03 00:56:41.919046 | orchestrator |
2026-03-03 00:56:41.919051 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-03 00:56:41.919056 | orchestrator | Tuesday 03 March 2026 00:56:28 +0000 (0:00:02.711) 0:00:48.641 *********
2026-03-03 00:56:41.919061 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-03 00:56:41.919065 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:56:41.919072 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-03 00:56:41.919078 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:56:41.919084 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-03 00:56:41.919090 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:56:41.919097 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-03 00:56:41.919104 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-03 00:56:41.919111 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-03 00:56:41.919118 | orchestrator |
2026-03-03 00:56:41.919124 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-03 00:56:41.919131 | orchestrator | Tuesday 03 March 2026 00:56:32 +0000 (0:00:04.099) 0:00:52.741 *********
2026-03-03 00:56:41.919137 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:56:41.919149 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:56:41.919156 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:56:41.919163 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:56:41.919170 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:56:41.919176 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:56:41.919183 | orchestrator |
2026-03-03 00:56:41.919189 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:56:41.919200 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-03 00:56:41.919207 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-03 00:56:41.919219 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-03 00:56:41.919225 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-03 00:56:41.919231 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-03 00:56:41.919243 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-03 00:56:41.919250 | orchestrator |
2026-03-03 00:56:41.919256 | orchestrator |
2026-03-03 00:56:41.919282 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:56:41.919288 | orchestrator | Tuesday 03 March 2026 00:56:40 +0000 (0:00:08.484) 0:01:01.225 *********
2026-03-03 00:56:41.919293 | orchestrator | ===============================================================================
2026-03-03 00:56:41.919299 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.76s
2026-03-03 00:56:41.919304 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.51s
2026-03-03 00:56:41.919309 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 7.78s
2026-03-03 00:56:41.919315 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.10s
2026-03-03 00:56:41.919327 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.20s
2026-03-03 00:56:41.919334 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.90s
2026-03-03 00:56:41.919340 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.71s
2026-03-03 00:56:41.919346 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.66s
2026-03-03 00:56:41.919352 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.48s
2026-03-03 00:56:41.919358 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.38s
2026-03-03 00:56:41.919365 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.38s
2026-03-03 00:56:41.919371 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.25s
2026-03-03 00:56:41.919377 | orchestrator | module-load : Load modules ---------------------------------------------- 1.15s
2026-03-03 00:56:41.919384 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.13s
2026-03-03 00:56:41.919390 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.11s
2026-03-03 00:56:41.919396 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s
2026-03-03 00:56:41.919402 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2026-03-03 00:56:41.919409 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.66s
2026-03-03 00:56:41.919416 | orchestrator | 2026-03-03 00:56:41 | INFO  | Task 3b6b72bf-1096-4d98-b5d9-c536378ba656 is in state STARTED
2026-03-03 00:56:41.919422 | orchestrator | 2026-03-03 00:56:41 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:56:44.963208 | orchestrator | 2026-03-03 00:56:44 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:56:44.963319 | orchestrator | 2026-03-03 00:56:44 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:56:44.963627 | orchestrator | 2026-03-03 00:56:44 | INFO  | Task 7d59ba8a-cf5d-4052-95a8-f1c7c45cc559 is in state STARTED
2026-03-03 00:56:44.964931 | orchestrator | 2026-03-03 00:56:44 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state STARTED
2026-03-03 00:56:44.965823 | orchestrator | 2026-03-03 00:56:44 | INFO  | Task 3b6b72bf-1096-4d98-b5d9-c536378ba656 is in state STARTED
2026-03-03 00:56:44.965855 | orchestrator | 2026-03-03 00:56:44 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:56:48.048180 | orchestrator | 2026-03-03 00:56:48 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:56:48.050143 | orchestrator | 2026-03-03 00:56:48 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:56:48.053047 | orchestrator | 2026-03-03 00:56:48 | INFO  | Task 7d59ba8a-cf5d-4052-95a8-f1c7c45cc559 is in state STARTED
2026-03-03 00:56:48.053836 | orchestrator | 2026-03-03 00:56:48 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state
STARTED 2026-03-03 00:56:48.054434 | orchestrator | 2026-03-03 00:56:48 | INFO  | Task 3b6b72bf-1096-4d98-b5d9-c536378ba656 is in state STARTED 2026-03-03 00:56:48.054481 | orchestrator | 2026-03-03 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-03-03 00:58:13.650477 | orchestrator | 2026-03-03 00:58:13 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 00:58:13.651290 | orchestrator | 2026-03-03 00:58:13 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED 2026-03-03 00:58:13.651334 | orchestrator | 2026-03-03 00:58:13 | INFO  | Task 7d59ba8a-cf5d-4052-95a8-f1c7c45cc559 is in state STARTED 2026-03-03 00:58:13.653717 | orchestrator | 2026-03-03 00:58:13.653771 | orchestrator | 2026-03-03 00:58:13 | INFO  | Task 66f24fc7-032a-4d7b-9de6-64ed407aac3a is in state SUCCESS 2026-03-03 00:58:13.655561 | orchestrator | 2026-03-03 00:58:13.655605 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-03 00:58:13.655613 | orchestrator | 2026-03-03 00:58:13.655618 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-03 00:58:13.655623 | orchestrator | Tuesday 03 March 2026 00:53:23 +0000 (0:00:00.150) 0:00:00.150 ********* 2026-03-03 00:58:13.655627 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:58:13.655632 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:58:13.655636 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:58:13.655640 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:58:13.655644 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:58:13.655648 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:58:13.655652 | orchestrator | 2026-03-03 00:58:13.655656 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-03 00:58:13.655660 | orchestrator | Tuesday 03 March 2026 00:53:24 +0000 (0:00:00.667) 0:00:00.818 ********* 2026-03-03 00:58:13.655678 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:58:13.655683 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:58:13.655687 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:58:13.655691 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.655694 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:58:13.655698 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:58:13.655702 | orchestrator | 2026-03-03 
00:58:13.655706 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-03 00:58:13.655710 | orchestrator | Tuesday 03 March 2026 00:53:24 +0000 (0:00:00.658) 0:00:01.477 ********* 2026-03-03 00:58:13.655714 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:58:13.655718 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:58:13.655722 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:58:13.655726 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.655729 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:58:13.655733 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:58:13.655737 | orchestrator | 2026-03-03 00:58:13.655741 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-03 00:58:13.655745 | orchestrator | Tuesday 03 March 2026 00:53:25 +0000 (0:00:00.839) 0:00:02.317 ********* 2026-03-03 00:58:13.655749 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:58:13.655761 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:58:13.655767 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:58:13.655777 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:58:13.655783 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:58:13.655790 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:58:13.655796 | orchestrator | 2026-03-03 00:58:13.655802 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-03 00:58:13.655808 | orchestrator | Tuesday 03 March 2026 00:53:27 +0000 (0:00:02.171) 0:00:04.488 ********* 2026-03-03 00:58:13.655814 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:58:13.655820 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:58:13.655826 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:58:13.655832 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:58:13.655838 | orchestrator | changed: [testbed-node-1] 
2026-03-03 00:58:13.655844 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:58:13.655851 | orchestrator | 2026-03-03 00:58:13.655856 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-03 00:58:13.655863 | orchestrator | Tuesday 03 March 2026 00:53:28 +0000 (0:00:01.180) 0:00:05.668 ********* 2026-03-03 00:58:13.655868 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:58:13.655874 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:58:13.655879 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:58:13.655886 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:58:13.655892 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:58:13.655898 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:58:13.655903 | orchestrator | 2026-03-03 00:58:13.655909 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-03 00:58:13.655915 | orchestrator | Tuesday 03 March 2026 00:53:30 +0000 (0:00:01.230) 0:00:06.899 ********* 2026-03-03 00:58:13.655921 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:58:13.655927 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:58:13.655934 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:58:13.655940 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.655947 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:58:13.655953 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:58:13.655957 | orchestrator | 2026-03-03 00:58:13.655960 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-03 00:58:13.655964 | orchestrator | Tuesday 03 March 2026 00:53:31 +0000 (0:00:00.963) 0:00:07.862 ********* 2026-03-03 00:58:13.655968 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:58:13.655972 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:58:13.655977 | orchestrator | skipping: [testbed-node-5] 2026-03-03 
00:58:13.655991 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656028 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656034 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656040 | orchestrator |
2026-03-03 00:58:13.656046 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-03 00:58:13.656052 | orchestrator | Tuesday 03 March 2026 00:53:31 +0000 (0:00:00.785) 0:00:08.647 *********
2026-03-03 00:58:13.656058 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-03 00:58:13.656064 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-03 00:58:13.656070 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.656076 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-03 00:58:13.656082 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-03 00:58:13.656087 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.656093 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-03 00:58:13.656099 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-03 00:58:13.656106 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.656112 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-03 00:58:13.656130 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-03 00:58:13.656137 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-03 00:58:13.656144 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-03 00:58:13.656150 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656154 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656160 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-03 00:58:13.656167 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-03 00:58:13.656173 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656179 | orchestrator |
2026-03-03 00:58:13.656186 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-03 00:58:13.656192 | orchestrator | Tuesday 03 March 2026 00:53:32 +0000 (0:00:00.622) 0:00:09.270 *********
2026-03-03 00:58:13.656199 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.656205 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.656211 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.656218 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656224 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656232 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656239 | orchestrator |
2026-03-03 00:58:13.656247 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-03 00:58:13.656255 | orchestrator | Tuesday 03 March 2026 00:53:33 +0000 (0:00:01.057) 0:00:10.328 *********
2026-03-03 00:58:13.656262 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:58:13.656268 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:58:13.656275 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:58:13.656281 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.656287 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.656293 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.656300 | orchestrator |
2026-03-03 00:58:13.656306 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-03 00:58:13.656314 | orchestrator | Tuesday 03 March 2026 00:53:34 +0000 (0:00:01.184) 0:00:11.512 *********
2026-03-03 00:58:13.656320 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:58:13.656333 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.656341 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.656347 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.656355 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:58:13.656368 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:58:13.656375 | orchestrator |
2026-03-03 00:58:13.656381 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-03 00:58:13.656388 | orchestrator | Tuesday 03 March 2026 00:53:40 +0000 (0:00:05.526) 0:00:17.038 *********
2026-03-03 00:58:13.656394 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.656401 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.656407 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.656414 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656420 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656427 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656433 | orchestrator |
2026-03-03 00:58:13.656440 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-03 00:58:13.656446 | orchestrator | Tuesday 03 March 2026 00:53:41 +0000 (0:00:01.436) 0:00:18.474 *********
2026-03-03 00:58:13.656453 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.656459 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.656466 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.656472 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656478 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656484 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656491 | orchestrator |
2026-03-03 00:58:13.656497 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-03 00:58:13.656506 | orchestrator | Tuesday 03 March 2026 00:53:43 +0000 (0:00:01.345) 0:00:19.820 *********
2026-03-03 00:58:13.656513 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.656519 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.656525 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.656532 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656538 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656544 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656551 | orchestrator |
2026-03-03 00:58:13.656557 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-03 00:58:13.656563 | orchestrator | Tuesday 03 March 2026 00:53:44 +0000 (0:00:01.109) 0:00:20.930 *********
2026-03-03 00:58:13.656570 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-03 00:58:13.656576 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-03 00:58:13.656582 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.656589 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-03 00:58:13.656595 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-03 00:58:13.656601 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.656608 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-03 00:58:13.656614 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-03 00:58:13.656620 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.656626 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-03 00:58:13.656632 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-03 00:58:13.656638 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656642 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-03 00:58:13.656646 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-03 00:58:13.656649 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656653 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-03 00:58:13.656657 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-03 00:58:13.656661 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656665 | orchestrator |
2026-03-03 00:58:13.656669 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-03 00:58:13.656677 | orchestrator | Tuesday 03 March 2026 00:53:46 +0000 (0:00:02.328) 0:00:23.258 *********
2026-03-03 00:58:13.656681 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.656689 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.656693 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656697 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.656700 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656704 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656708 | orchestrator |
2026-03-03 00:58:13.656712 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-03 00:58:13.656716 | orchestrator | Tuesday 03 March 2026 00:53:47 +0000 (0:00:00.754) 0:00:24.013 *********
2026-03-03 00:58:13.656720 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.656724 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.656728 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.656732 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656736 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656740 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656744 | orchestrator |
2026-03-03 00:58:13.656748 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-03 00:58:13.656751 | orchestrator |
2026-03-03 00:58:13.656755 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-03 00:58:13.656759 | orchestrator | Tuesday 03 March 2026 00:53:49 +0000 (0:00:01.896) 0:00:25.910 *********
2026-03-03 00:58:13.656763 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.656767 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.656771 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.656775 | orchestrator |
2026-03-03 00:58:13.656779 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-03 00:58:13.656783 | orchestrator | Tuesday 03 March 2026 00:53:50 +0000 (0:00:01.424) 0:00:27.334 *********
2026-03-03 00:58:13.656786 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.656790 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.656794 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.656798 | orchestrator |
2026-03-03 00:58:13.656802 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-03 00:58:13.656806 | orchestrator | Tuesday 03 March 2026 00:53:52 +0000 (0:00:01.518) 0:00:28.852 *********
2026-03-03 00:58:13.656813 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.656817 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.656820 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.656824 | orchestrator |
2026-03-03 00:58:13.656828 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-03 00:58:13.656832 | orchestrator | Tuesday 03 March 2026 00:53:52 +0000 (0:00:00.876) 0:00:29.729 *********
2026-03-03 00:58:13.656836 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.656840 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.656843 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.656847 | orchestrator |
2026-03-03 00:58:13.656851 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-03 00:58:13.656855 | orchestrator | Tuesday 03 March 2026 00:53:53 +0000 (0:00:00.791) 0:00:30.520 *********
2026-03-03 00:58:13.656859 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.656863 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.656869 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.656876 | orchestrator |
2026-03-03 00:58:13.656882 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-03 00:58:13.656888 | orchestrator | Tuesday 03 March 2026 00:53:54 +0000 (0:00:00.258) 0:00:30.779 *********
2026-03-03 00:58:13.656893 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.656899 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.656905 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.656911 | orchestrator |
2026-03-03 00:58:13.656917 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-03 00:58:13.656923 | orchestrator | Tuesday 03 March 2026 00:53:55 +0000 (0:00:01.138) 0:00:31.917 *********
2026-03-03 00:58:13.656929 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.656940 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.656946 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.656952 | orchestrator |
2026-03-03 00:58:13.656958 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-03 00:58:13.656964 | orchestrator | Tuesday 03 March 2026 00:53:56 +0000 (0:00:01.621) 0:00:33.538 *********
2026-03-03 00:58:13.656970 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:58:13.656977 | orchestrator |
2026-03-03 00:58:13.656984 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-03 00:58:13.656989 | orchestrator | Tuesday 03 March 2026 00:53:57 +0000 (0:00:00.645) 0:00:34.184 *********
2026-03-03 00:58:13.657008 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.657015 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.657018 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.657022 | orchestrator |
2026-03-03 00:58:13.657026 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-03 00:58:13.657030 | orchestrator | Tuesday 03 March 2026 00:54:00 +0000 (0:00:02.617) 0:00:36.802 *********
2026-03-03 00:58:13.657034 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.657038 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.657042 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.657046 | orchestrator |
2026-03-03 00:58:13.657049 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-03 00:58:13.657053 | orchestrator | Tuesday 03 March 2026 00:54:01 +0000 (0:00:01.219) 0:00:38.022 *********
2026-03-03 00:58:13.657057 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.657108 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.657112 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.657116 | orchestrator |
2026-03-03 00:58:13.657120 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-03 00:58:13.657124 | orchestrator | Tuesday 03 March 2026 00:54:03 +0000 (0:00:01.912) 0:00:39.934 *********
2026-03-03 00:58:13.657128 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.657132 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.657136 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.657140 | orchestrator |
2026-03-03 00:58:13.657143 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-03 00:58:13.657152 | orchestrator | Tuesday 03 March 2026 00:54:05 +0000 (0:00:02.302) 0:00:42.237 *********
2026-03-03 00:58:13.657156 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.657160 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.657163 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.657167 | orchestrator |
2026-03-03 00:58:13.657171 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-03 00:58:13.657175 | orchestrator | Tuesday 03 March 2026 00:54:06 +0000 (0:00:01.064) 0:00:43.301 *********
2026-03-03 00:58:13.657179 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.657183 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.657187 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.657193 | orchestrator |
2026-03-03 00:58:13.657199 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-03 00:58:13.657206 | orchestrator | Tuesday 03 March 2026 00:54:06 +0000 (0:00:00.425) 0:00:43.727 *********
2026-03-03 00:58:13.657213 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.657219 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.657225 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.657231 | orchestrator |
2026-03-03 00:58:13.657238 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-03 00:58:13.657243 | orchestrator | Tuesday 03 March 2026 00:54:08 +0000 (0:00:01.818) 0:00:45.545 *********
2026-03-03 00:58:13.657249 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.657255 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.657261 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.657275 | orchestrator |
2026-03-03 00:58:13.657282 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-03 00:58:13.657288 | orchestrator | Tuesday 03 March 2026 00:54:11 +0000 (0:00:02.334) 0:00:47.879 *********
2026-03-03 00:58:13.657295 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.657301 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.657307 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.657313 | orchestrator |
2026-03-03 00:58:13.657319 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-03 00:58:13.657326 | orchestrator | Tuesday 03 March 2026 00:54:11 +0000 (0:00:00.572) 0:00:48.451 *********
2026-03-03 00:58:13.657333 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-03 00:58:13.657341 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-03 00:58:13.657348 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-03 00:58:13.657355 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-03 00:58:13.657361 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-03 00:58:13.657381 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-03 00:58:13.657388 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-03 00:58:13.657396 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-03 00:58:13.657403 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-03 00:58:13.657407 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-03 00:58:13.657413 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-03 00:58:13.657420 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-03 00:58:13.657426 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-03 00:58:13.657431 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-03 00:58:13.657437 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-03 00:58:13.657442 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.657448 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.657454 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.657459 | orchestrator |
2026-03-03 00:58:13.657466 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-03 00:58:13.657471 | orchestrator | Tuesday 03 March 2026 00:55:05 +0000 (0:00:53.515) 0:01:41.966 *********
2026-03-03 00:58:13.657476 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.657482 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.657487 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.657493 | orchestrator |
2026-03-03 00:58:13.657499 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-03 00:58:13.657510 | orchestrator | Tuesday 03 March 2026 00:55:05 +0000 (0:00:00.297) 0:01:42.264 *********
2026-03-03 00:58:13.657523 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.657529 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.657535 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.657540 | orchestrator |
2026-03-03 00:58:13.657546 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-03 00:58:13.657552 | orchestrator | Tuesday 03 March 2026 00:55:06 +0000 (0:00:01.097) 0:01:43.361 *********
2026-03-03 00:58:13.657558 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.657565 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.657571 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.657577 | orchestrator |
2026-03-03 00:58:13.657584 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-03 00:58:13.657590 | orchestrator | Tuesday 03 March 2026 00:55:07 +0000 (0:00:01.399) 0:01:44.761 *********
2026-03-03 00:58:13.657596 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.657603 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.657608 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.657611 | orchestrator |
2026-03-03 00:58:13.657616 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-03 00:58:13.657620 | orchestrator | Tuesday 03 March 2026 00:55:49 +0000 (0:00:41.300) 0:02:26.062 *********
2026-03-03 00:58:13.657624 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.658303 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.658344 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.658352 | orchestrator |
2026-03-03 00:58:13.658359 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-03 00:58:13.658367 | orchestrator | Tuesday 03 March 2026 00:55:50 +0000 (0:00:00.738) 0:02:26.800 *********
2026-03-03 00:58:13.658373 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.658380 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.658387 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.658393 | orchestrator |
2026-03-03 00:58:13.658400 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-03 00:58:13.658408 | orchestrator | Tuesday 03 March 2026 00:55:50 +0000 (0:00:00.706) 0:02:27.506 *********
2026-03-03 00:58:13.658415 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.658422 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.658429 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.658436 | orchestrator |
2026-03-03 00:58:13.658443 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-03 00:58:13.658449 | orchestrator | Tuesday 03 March 2026 00:55:51 +0000 (0:00:00.828) 0:02:28.334 *********
2026-03-03 00:58:13.658456 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.658463 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.658470 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.658476 | orchestrator |
2026-03-03 00:58:13.658486 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-03 00:58:13.658493 | orchestrator | Tuesday 03 March 2026 00:55:52 +0000 (0:00:01.115) 0:02:29.449 *********
2026-03-03 00:58:13.658500 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.658507 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.658514 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.658521 | orchestrator |
2026-03-03 00:58:13.658527 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-03 00:58:13.658534 | orchestrator | Tuesday 03 March 2026 00:55:53 +0000 (0:00:00.384) 0:02:29.834 *********
2026-03-03 00:58:13.658541 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.658548 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.658555 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.658561 | orchestrator |
2026-03-03 00:58:13.658568 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-03 00:58:13.658575 | orchestrator | Tuesday 03 March 2026 00:55:53 +0000 (0:00:00.730) 0:02:30.636 *********
2026-03-03 00:58:13.658582 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.658598 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.658605 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.658612 | orchestrator |
2026-03-03 00:58:13.658619 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-03 00:58:13.658625 | orchestrator | Tuesday 03 March 2026 00:55:54 +0000 (0:00:00.731) 0:02:31.367 *********
2026-03-03 00:58:13.658632 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.658639 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.658645 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.658652 | orchestrator |
2026-03-03 00:58:13.658659 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-03 00:58:13.658666 | orchestrator | Tuesday 03 March 2026 00:55:56 +0000 (0:00:01.455) 0:02:32.822 *********
2026-03-03 00:58:13.658672 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:58:13.658679 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:58:13.658686 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:58:13.658693 | orchestrator |
2026-03-03 00:58:13.658700 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-03 00:58:13.658707 | orchestrator | Tuesday 03 March 2026 00:55:57 +0000 (0:00:00.959) 0:02:33.782 *********
2026-03-03 00:58:13.658713 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.658720 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.658727 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.658733 | orchestrator |
2026-03-03 00:58:13.658740 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-03 00:58:13.658747 | orchestrator | Tuesday 03 March 2026 00:55:57 +0000 (0:00:00.343) 0:02:34.125 *********
2026-03-03 00:58:13.658754 | orchestrator | skipping: [testbed-node-0]
2026-03-03 00:58:13.658760 | orchestrator | skipping: [testbed-node-1]
2026-03-03 00:58:13.658767 | orchestrator | skipping: [testbed-node-2]
2026-03-03 00:58:13.658774 | orchestrator |
2026-03-03 00:58:13.658780 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-03 00:58:13.658787 | orchestrator | Tuesday 03 March 2026 00:55:57 +0000 (0:00:00.304) 0:02:34.430 *********
2026-03-03 00:58:13.658794 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.658801 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.658808 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.658815 | orchestrator |
2026-03-03 00:58:13.658822 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-03 00:58:13.658828 | orchestrator | Tuesday 03 March 2026 00:55:58 +0000 (0:00:00.865) 0:02:35.295 *********
2026-03-03 00:58:13.658835 | orchestrator | ok: [testbed-node-0]
2026-03-03 00:58:13.658853 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:58:13.658860 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:58:13.658867 | orchestrator |
2026-03-03 00:58:13.658873 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-03 00:58:13.658881 | orchestrator | Tuesday 03 March 2026 00:55:59 +0000 (0:00:00.735) 0:02:36.030 *********
2026-03-03 00:58:13.658887 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-03 00:58:13.658894 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-03 00:58:13.658900 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-03 00:58:13.658906 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-03 00:58:13.658912 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-03 00:58:13.658918 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-03 00:58:13.658924 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-03 00:58:13.658930 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-03 00:58:13.658941 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-03 00:58:13.658947 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-03 00:58:13.658953 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-03 00:58:13.658959 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-03 00:58:13.658964 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-03 00:58:13.658969 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-03 00:58:13.658982 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-03 00:58:13.658988 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-03 00:58:13.659008 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-03 00:58:13.659019 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-03 00:58:13.659024 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-03 00:58:13.659029 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-03 00:58:13.659035 | orchestrator |
2026-03-03 00:58:13.659041 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-03 00:58:13.659047 | orchestrator |
2026-03-03 00:58:13.659053 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-03 00:58:13.659058 | orchestrator | Tuesday 03 March 2026 00:56:02 +0000 (0:00:03.581) 0:02:39.612 *********
2026-03-03 00:58:13.659064 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:58:13.659070 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:58:13.659075 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:58:13.659080 | orchestrator |
2026-03-03 00:58:13.659086 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-03 00:58:13.659091 | orchestrator | Tuesday 03 March 2026 00:56:03 +0000 (0:00:00.601) 0:02:40.213 *********
2026-03-03 00:58:13.659097 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:58:13.659103 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:58:13.659109 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:58:13.659114 | orchestrator |
2026-03-03 00:58:13.659119 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-03 00:58:13.659125 | orchestrator | Tuesday 03 March 2026 00:56:04 +0000 (0:00:00.645) 0:02:40.859 *********
2026-03-03 00:58:13.659131 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:58:13.659137 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:58:13.659143 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:58:13.659149 | orchestrator |
2026-03-03 00:58:13.659155 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-03 00:58:13.659160 | orchestrator | Tuesday 03 March 2026 00:56:04 +0000 (0:00:00.451) 0:02:41.311 *********
2026-03-03 00:58:13.659166 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 00:58:13.659172 | orchestrator |
2026-03-03 00:58:13.659177 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-03 00:58:13.659183 | orchestrator | Tuesday 03 March 2026 00:56:05 +0000 (0:00:00.710) 0:02:42.021 *********
2026-03-03 00:58:13.659188 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.659193 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.659199 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.659205 | orchestrator |
2026-03-03 00:58:13.659211 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-03 00:58:13.659217 | orchestrator | Tuesday 03 March 2026 00:56:05 +0000 (0:00:00.471) 0:02:42.493 *********
2026-03-03 00:58:13.659229 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.659235 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.659242 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.659247 | orchestrator |
2026-03-03 00:58:13.659253 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-03 00:58:13.659264 | orchestrator | Tuesday 03 March 2026 00:56:06 +0000 (0:00:00.426) 0:02:42.920 *********
2026-03-03 00:58:13.659271 | orchestrator | skipping: [testbed-node-3]
2026-03-03 00:58:13.659277 | orchestrator | skipping: [testbed-node-4]
2026-03-03 00:58:13.659283 | orchestrator | skipping: [testbed-node-5]
2026-03-03 00:58:13.659289 | orchestrator |
2026-03-03 00:58:13.659295 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-03 00:58:13.659301 | orchestrator | Tuesday 03 March 2026 00:56:06 +0000 (0:00:00.295) 0:02:43.215 *********
2026-03-03 00:58:13.659308 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:58:13.659314 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:58:13.659320 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:58:13.659327 | orchestrator |
2026-03-03 00:58:13.659333 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-03 00:58:13.659340 | orchestrator | Tuesday 03 March 2026 00:56:07 +0000 (0:00:00.753) 0:02:43.968 *********
2026-03-03 00:58:13.659346 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:58:13.659352 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:58:13.659359 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:58:13.659365 | orchestrator |
2026-03-03 00:58:13.659371 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-03 00:58:13.659377 | orchestrator | Tuesday 03 March 2026 00:56:08 +0000 (0:00:01.052) 0:02:45.020 *********
2026-03-03 00:58:13.659383 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:58:13.659389 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:58:13.659396 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:58:13.659402 | orchestrator |
2026-03-03 00:58:13.659408 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-03 00:58:13.659414 | orchestrator | Tuesday 03 March 2026 00:56:09 +0000 (0:00:01.436) 0:02:46.457 *********
2026-03-03 00:58:13.659420 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:58:13.659426 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:58:13.659432 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:58:13.659437 | orchestrator |
2026-03-03 00:58:13.659443 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-03 00:58:13.659449 | orchestrator |
2026-03-03 00:58:13.659454 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-03 00:58:13.659460 | orchestrator | Tuesday 03 March 2026 00:56:20 +0000 (0:00:10.517) 0:02:56.975 *********
2026-03-03 00:58:13.659465 | orchestrator | ok: [testbed-manager]
2026-03-03 00:58:13.659471 | orchestrator |
2026-03-03 00:58:13.659477 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-03 00:58:13.659483 | orchestrator | Tuesday 03 March 2026 00:56:20 +0000 (0:00:00.725) 0:02:57.700 *********
2026-03-03 00:58:13.659489 | orchestrator | changed: [testbed-manager]
2026-03-03 00:58:13.659495 | orchestrator |
2026-03-03 00:58:13.659508 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-03 00:58:13.659515 | orchestrator | Tuesday 03 March 2026 00:56:21 +0000 (0:00:00.481) 0:02:58.182 *********
2026-03-03 00:58:13.659521 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-03 00:58:13.659527 | orchestrator |
2026-03-03 00:58:13.659534 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-03 00:58:13.659540 | orchestrator | Tuesday 03 March 2026 00:56:21 +0000 (0:00:00.551) 0:02:58.734 *********
2026-03-03 00:58:13.659546 | orchestrator | changed: [testbed-manager]
2026-03-03 00:58:13.659552 | orchestrator |
2026-03-03 00:58:13.659559 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-03 00:58:13.659565 | orchestrator | Tuesday 03 March 2026 00:56:22 +0000 (0:00:01.026) 0:02:59.760 *********
2026-03-03 00:58:13.659577 | orchestrator | changed: [testbed-manager]
2026-03-03 00:58:13.659583 | orchestrator |
2026-03-03 00:58:13.659588 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-03 00:58:13.659595 | orchestrator | Tuesday 03 March 2026 00:56:23 +0000 (0:00:00.767) 0:03:00.528 *********
2026-03-03 00:58:13.659602 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-03 00:58:13.659608 | orchestrator |
2026-03-03 00:58:13.659614 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-03 00:58:13.659620 | orchestrator | Tuesday 03 March 2026 00:56:25 +0000 (0:00:01.627) 0:03:02.155 *********
2026-03-03 00:58:13.659627 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-03 00:58:13.659633 | orchestrator |
2026-03-03 00:58:13.659639 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-03 00:58:13.659645 | orchestrator | Tuesday 03 March 2026 00:56:26 +0000 (0:00:00.893) 0:03:03.049 ********* 2026-03-03 00:58:13.659651 | orchestrator | changed: [testbed-manager] 2026-03-03 00:58:13.659657 | orchestrator | 2026-03-03 00:58:13.659663 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-03 00:58:13.659670 | orchestrator | Tuesday 03 March 2026 00:56:26 +0000 (0:00:00.481) 0:03:03.530 ********* 2026-03-03 00:58:13.659676 | orchestrator | changed: [testbed-manager] 2026-03-03 00:58:13.659682 | orchestrator | 2026-03-03 00:58:13.659688 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-03 00:58:13.659695 | orchestrator | 2026-03-03 00:58:13.659701 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-03 00:58:13.659707 | orchestrator | Tuesday 03 March 2026 00:56:27 +0000 (0:00:00.436) 0:03:03.967 ********* 2026-03-03 00:58:13.659714 | orchestrator | ok: [testbed-manager] 2026-03-03 00:58:13.659720 | orchestrator | 2026-03-03 00:58:13.659726 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-03 00:58:13.659732 | orchestrator | Tuesday 03 March 2026 00:56:27 +0000 (0:00:00.147) 0:03:04.114 ********* 2026-03-03 00:58:13.659738 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-03 00:58:13.659745 | orchestrator | 2026-03-03 00:58:13.659751 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-03 00:58:13.659757 | orchestrator | Tuesday 03 March 2026 00:56:27 +0000 (0:00:00.171) 0:03:04.286 ********* 2026-03-03 00:58:13.659763 | orchestrator | ok: [testbed-manager] 2026-03-03 00:58:13.659769 | orchestrator | 2026-03-03 00:58:13.659776 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-03-03 00:58:13.659782 | orchestrator | Tuesday 03 March 2026 00:56:28 +0000 (0:00:00.766) 0:03:05.052 ********* 2026-03-03 00:58:13.659794 | orchestrator | ok: [testbed-manager] 2026-03-03 00:58:13.659801 | orchestrator | 2026-03-03 00:58:13.659807 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-03 00:58:13.659813 | orchestrator | Tuesday 03 March 2026 00:56:29 +0000 (0:00:01.437) 0:03:06.490 ********* 2026-03-03 00:58:13.659819 | orchestrator | changed: [testbed-manager] 2026-03-03 00:58:13.659826 | orchestrator | 2026-03-03 00:58:13.659832 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-03 00:58:13.659838 | orchestrator | Tuesday 03 March 2026 00:56:30 +0000 (0:00:00.724) 0:03:07.214 ********* 2026-03-03 00:58:13.659842 | orchestrator | ok: [testbed-manager] 2026-03-03 00:58:13.659846 | orchestrator | 2026-03-03 00:58:13.659849 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-03 00:58:13.659853 | orchestrator | Tuesday 03 March 2026 00:56:30 +0000 (0:00:00.422) 0:03:07.637 ********* 2026-03-03 00:58:13.659857 | orchestrator | changed: [testbed-manager] 2026-03-03 00:58:13.659861 | orchestrator | 2026-03-03 00:58:13.659864 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-03 00:58:13.659868 | orchestrator | Tuesday 03 March 2026 00:56:38 +0000 (0:00:07.435) 0:03:15.073 ********* 2026-03-03 00:58:13.659876 | orchestrator | changed: [testbed-manager] 2026-03-03 00:58:13.659880 | orchestrator | 2026-03-03 00:58:13.659883 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-03 00:58:13.659887 | orchestrator | Tuesday 03 March 2026 00:56:51 +0000 (0:00:12.907) 0:03:27.980 ********* 2026-03-03 00:58:13.659891 | orchestrator | ok: [testbed-manager] 2026-03-03 
00:58:13.659895 | orchestrator | 2026-03-03 00:58:13.659899 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-03 00:58:13.659902 | orchestrator | 2026-03-03 00:58:13.659906 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-03 00:58:13.659910 | orchestrator | Tuesday 03 March 2026 00:56:51 +0000 (0:00:00.512) 0:03:28.493 ********* 2026-03-03 00:58:13.659914 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:58:13.659918 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:58:13.659922 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:58:13.659925 | orchestrator | 2026-03-03 00:58:13.659929 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-03 00:58:13.659933 | orchestrator | Tuesday 03 March 2026 00:56:51 +0000 (0:00:00.261) 0:03:28.755 ********* 2026-03-03 00:58:13.659937 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.659941 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:58:13.659945 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:58:13.659949 | orchestrator | 2026-03-03 00:58:13.659953 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-03 00:58:13.659960 | orchestrator | Tuesday 03 March 2026 00:56:52 +0000 (0:00:00.292) 0:03:29.047 ********* 2026-03-03 00:58:13.659964 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:58:13.659968 | orchestrator | 2026-03-03 00:58:13.659972 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-03 00:58:13.659976 | orchestrator | Tuesday 03 March 2026 00:56:52 +0000 (0:00:00.596) 0:03:29.644 ********* 2026-03-03 00:58:13.659979 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-03 00:58:13.659983 | 
orchestrator | 2026-03-03 00:58:13.659987 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-03 00:58:13.659991 | orchestrator | Tuesday 03 March 2026 00:56:53 +0000 (0:00:00.784) 0:03:30.428 ********* 2026-03-03 00:58:13.660038 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 00:58:13.660043 | orchestrator | 2026-03-03 00:58:13.660047 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-03 00:58:13.660051 | orchestrator | Tuesday 03 March 2026 00:56:54 +0000 (0:00:00.831) 0:03:31.260 ********* 2026-03-03 00:58:13.660055 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.660059 | orchestrator | 2026-03-03 00:58:13.660063 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-03 00:58:13.660067 | orchestrator | Tuesday 03 March 2026 00:56:54 +0000 (0:00:00.126) 0:03:31.386 ********* 2026-03-03 00:58:13.660071 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 00:58:13.660074 | orchestrator | 2026-03-03 00:58:13.660078 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-03 00:58:13.660082 | orchestrator | Tuesday 03 March 2026 00:56:55 +0000 (0:00:00.904) 0:03:32.290 ********* 2026-03-03 00:58:13.660086 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.660090 | orchestrator | 2026-03-03 00:58:13.660093 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-03 00:58:13.660097 | orchestrator | Tuesday 03 March 2026 00:56:55 +0000 (0:00:00.102) 0:03:32.393 ********* 2026-03-03 00:58:13.660101 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.660105 | orchestrator | 2026-03-03 00:58:13.660109 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-03 00:58:13.660113 | orchestrator | Tuesday 03 March 
2026 00:56:55 +0000 (0:00:00.101) 0:03:32.495 ********* 2026-03-03 00:58:13.660117 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.660121 | orchestrator | 2026-03-03 00:58:13.660128 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-03 00:58:13.660132 | orchestrator | Tuesday 03 March 2026 00:56:55 +0000 (0:00:00.127) 0:03:32.623 ********* 2026-03-03 00:58:13.660136 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.660140 | orchestrator | 2026-03-03 00:58:13.660144 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-03 00:58:13.660147 | orchestrator | Tuesday 03 March 2026 00:56:55 +0000 (0:00:00.096) 0:03:32.719 ********* 2026-03-03 00:58:13.660151 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-03 00:58:13.660156 | orchestrator | 2026-03-03 00:58:13.660162 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-03 00:58:13.660168 | orchestrator | Tuesday 03 March 2026 00:57:01 +0000 (0:00:05.143) 0:03:37.862 ********* 2026-03-03 00:58:13.660174 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-03 00:58:13.660184 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-03-03 00:58:13.660191 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-03 00:58:13.660197 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-03 00:58:13.660202 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-03 00:58:13.660208 | orchestrator | 2026-03-03 00:58:13.660213 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-03 00:58:13.660220 | orchestrator | Tuesday 03 March 2026 00:57:47 +0000 (0:00:46.380) 0:04:24.242 ********* 2026-03-03 00:58:13.660225 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 00:58:13.660231 | orchestrator | 2026-03-03 00:58:13.660237 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-03 00:58:13.660243 | orchestrator | Tuesday 03 March 2026 00:57:48 +0000 (0:00:01.106) 0:04:25.349 ********* 2026-03-03 00:58:13.660249 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-03 00:58:13.660254 | orchestrator | 2026-03-03 00:58:13.660261 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-03 00:58:13.660267 | orchestrator | Tuesday 03 March 2026 00:57:50 +0000 (0:00:01.791) 0:04:27.141 ********* 2026-03-03 00:58:13.660274 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-03 00:58:13.660280 | orchestrator | 2026-03-03 00:58:13.660287 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-03 00:58:13.660293 | orchestrator | Tuesday 03 March 2026 00:57:51 +0000 (0:00:00.989) 0:04:28.131 ********* 2026-03-03 00:58:13.660301 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.660305 | orchestrator | 2026-03-03 00:58:13.660309 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-03 00:58:13.660313 | orchestrator 
| Tuesday 03 March 2026 00:57:51 +0000 (0:00:00.167) 0:04:28.299 ********* 2026-03-03 00:58:13.660317 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-03 00:58:13.660321 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-03 00:58:13.660325 | orchestrator | 2026-03-03 00:58:13.660328 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-03 00:58:13.660332 | orchestrator | Tuesday 03 March 2026 00:57:53 +0000 (0:00:01.802) 0:04:30.102 ********* 2026-03-03 00:58:13.660336 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.660340 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:58:13.660344 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:58:13.660347 | orchestrator | 2026-03-03 00:58:13.660356 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-03 00:58:13.660360 | orchestrator | Tuesday 03 March 2026 00:57:53 +0000 (0:00:00.290) 0:04:30.393 ********* 2026-03-03 00:58:13.660364 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:58:13.660368 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:58:13.660371 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:58:13.660379 | orchestrator | 2026-03-03 00:58:13.660383 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-03 00:58:13.660387 | orchestrator | 2026-03-03 00:58:13.660391 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-03 00:58:13.660395 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:00.923) 0:04:31.317 ********* 2026-03-03 00:58:13.660399 | orchestrator | ok: [testbed-manager] 2026-03-03 00:58:13.660403 | orchestrator | 2026-03-03 00:58:13.660407 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-03 00:58:13.660410 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:00.116) 0:04:31.433 ********* 2026-03-03 00:58:13.660414 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-03 00:58:13.660418 | orchestrator | 2026-03-03 00:58:13.660422 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-03 00:58:13.660426 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:00.232) 0:04:31.666 ********* 2026-03-03 00:58:13.660430 | orchestrator | changed: [testbed-manager] 2026-03-03 00:58:13.660434 | orchestrator | 2026-03-03 00:58:13.660438 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-03 00:58:13.660442 | orchestrator | 2026-03-03 00:58:13.660445 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-03 00:58:13.660449 | orchestrator | Tuesday 03 March 2026 00:57:59 +0000 (0:00:04.709) 0:04:36.376 ********* 2026-03-03 00:58:13.660453 | orchestrator | ok: [testbed-node-3] 2026-03-03 00:58:13.660457 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:58:13.660461 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:58:13.660465 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:58:13.660468 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:58:13.660472 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:58:13.660476 | orchestrator | 2026-03-03 00:58:13.660480 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-03 00:58:13.660484 | orchestrator | Tuesday 03 March 2026 00:58:00 +0000 (0:00:00.839) 0:04:37.216 ********* 2026-03-03 00:58:13.660488 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-03 00:58:13.660492 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2026-03-03 00:58:13.660495 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-03 00:58:13.660499 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-03 00:58:13.660503 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-03 00:58:13.660507 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-03 00:58:13.660511 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-03 00:58:13.660515 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-03 00:58:13.660523 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-03 00:58:13.660527 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-03 00:58:13.660531 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-03 00:58:13.660535 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-03 00:58:13.660538 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-03 00:58:13.660542 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-03 00:58:13.660546 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-03 00:58:13.660550 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-03 00:58:13.660559 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-03 00:58:13.660563 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-03-03 00:58:13.660567 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-03 00:58:13.660571 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-03 00:58:13.660574 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-03 00:58:13.660578 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-03 00:58:13.660582 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-03 00:58:13.660586 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-03 00:58:13.660589 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-03 00:58:13.660593 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-03 00:58:13.660597 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-03 00:58:13.660603 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-03 00:58:13.660607 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-03 00:58:13.660611 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-03 00:58:13.660615 | orchestrator | 2026-03-03 00:58:13.660619 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-03 00:58:13.660623 | orchestrator | Tuesday 03 March 2026 00:58:12 +0000 (0:00:11.637) 0:04:48.853 ********* 2026-03-03 00:58:13.660627 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:58:13.660630 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:58:13.660634 | orchestrator | 
skipping: [testbed-node-5] 2026-03-03 00:58:13.660638 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:58:13.660642 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:58:13.660646 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.660650 | orchestrator | 2026-03-03 00:58:13.660653 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-03 00:58:13.660657 | orchestrator | Tuesday 03 March 2026 00:58:12 +0000 (0:00:00.663) 0:04:49.517 ********* 2026-03-03 00:58:13.660661 | orchestrator | skipping: [testbed-node-3] 2026-03-03 00:58:13.660665 | orchestrator | skipping: [testbed-node-4] 2026-03-03 00:58:13.660669 | orchestrator | skipping: [testbed-node-5] 2026-03-03 00:58:13.660673 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.660677 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:58:13.660681 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:58:13.660684 | orchestrator | 2026-03-03 00:58:13.660688 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:58:13.660692 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:58:13.660699 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-03 00:58:13.660703 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-03 00:58:13.660707 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-03 00:58:13.660711 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-03 00:58:13.660715 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-03 00:58:13.660723 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-03 00:58:13.660727 | orchestrator | 2026-03-03 00:58:13.660731 | orchestrator | 2026-03-03 00:58:13.660735 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:58:13.660742 | orchestrator | Tuesday 03 March 2026 00:58:13 +0000 (0:00:00.534) 0:04:50.051 ********* 2026-03-03 00:58:13.660746 | orchestrator | =============================================================================== 2026-03-03 00:58:13.660750 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.52s 2026-03-03 00:58:13.660754 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 46.38s 2026-03-03 00:58:13.660758 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 41.30s 2026-03-03 00:58:13.660762 | orchestrator | kubectl : Install required packages ------------------------------------ 12.91s 2026-03-03 00:58:13.660765 | orchestrator | Manage labels ---------------------------------------------------------- 11.64s 2026-03-03 00:58:13.660769 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.52s 2026-03-03 00:58:13.660774 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.44s 2026-03-03 00:58:13.660777 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.53s 2026-03-03 00:58:13.660781 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.14s 2026-03-03 00:58:13.660785 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.71s 2026-03-03 00:58:13.660789 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.58s 2026-03-03 00:58:13.660793 | orchestrator 
| k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.62s 2026-03-03 00:58:13.660797 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.33s 2026-03-03 00:58:13.660801 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.33s 2026-03-03 00:58:13.660805 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.30s 2026-03-03 00:58:13.660808 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.17s 2026-03-03 00:58:13.660812 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 1.91s 2026-03-03 00:58:13.660816 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.90s 2026-03-03 00:58:13.660820 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.82s 2026-03-03 00:58:13.660826 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.80s 2026-03-03 00:58:13.660830 | orchestrator | 2026-03-03 00:58:13.660834 | orchestrator | 2026-03-03 00:58:13 | INFO  | Task 3b6b72bf-1096-4d98-b5d9-c536378ba656 is in state SUCCESS 2026-03-03 00:58:13.660838 | orchestrator | 2026-03-03 00:58:13.660842 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-03 00:58:13.660845 | orchestrator | 2026-03-03 00:58:13.660849 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-03 00:58:13.660853 | orchestrator | Tuesday 03 March 2026 00:56:00 +0000 (0:00:00.090) 0:00:00.090 ********* 2026-03-03 00:58:13.660857 | orchestrator | ok: [localhost] => { 2026-03-03 00:58:13.660861 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2026-03-03 00:58:13.660865 | orchestrator | } 2026-03-03 00:58:13.660869 | orchestrator | 2026-03-03 00:58:13.660875 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-03 00:58:13.660881 | orchestrator | Tuesday 03 March 2026 00:56:00 +0000 (0:00:00.052) 0:00:00.143 ********* 2026-03-03 00:58:13.660891 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-03 00:58:13.660899 | orchestrator | ...ignoring 2026-03-03 00:58:13.660905 | orchestrator | 2026-03-03 00:58:13.660911 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-03 00:58:13.660916 | orchestrator | Tuesday 03 March 2026 00:56:03 +0000 (0:00:03.163) 0:00:03.306 ********* 2026-03-03 00:58:13.660922 | orchestrator | skipping: [localhost] 2026-03-03 00:58:13.660928 | orchestrator | 2026-03-03 00:58:13.660934 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-03 00:58:13.660940 | orchestrator | Tuesday 03 March 2026 00:56:03 +0000 (0:00:00.049) 0:00:03.355 ********* 2026-03-03 00:58:13.660946 | orchestrator | ok: [localhost] 2026-03-03 00:58:13.660952 | orchestrator | 2026-03-03 00:58:13.660958 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 00:58:13.660964 | orchestrator | 2026-03-03 00:58:13.660970 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 00:58:13.660976 | orchestrator | Tuesday 03 March 2026 00:56:03 +0000 (0:00:00.154) 0:00:03.510 ********* 2026-03-03 00:58:13.660982 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:58:13.660989 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:58:13.661017 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:58:13.661021 | orchestrator | 2026-03-03 
00:58:13.661025 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 00:58:13.661029 | orchestrator | Tuesday 03 March 2026 00:56:04 +0000 (0:00:00.437) 0:00:03.948 ********* 2026-03-03 00:58:13.661033 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-03 00:58:13.661037 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-03 00:58:13.661041 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-03 00:58:13.661045 | orchestrator | 2026-03-03 00:58:13.661049 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-03 00:58:13.661052 | orchestrator | 2026-03-03 00:58:13.661056 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-03 00:58:13.661060 | orchestrator | Tuesday 03 March 2026 00:56:05 +0000 (0:00:00.914) 0:00:04.862 ********* 2026-03-03 00:58:13.661069 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:58:13.661073 | orchestrator | 2026-03-03 00:58:13.661077 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-03 00:58:13.661081 | orchestrator | Tuesday 03 March 2026 00:56:05 +0000 (0:00:00.742) 0:00:05.605 ********* 2026-03-03 00:58:13.661085 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:58:13.661089 | orchestrator | 2026-03-03 00:58:13.661092 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-03 00:58:13.661096 | orchestrator | Tuesday 03 March 2026 00:56:06 +0000 (0:00:01.011) 0:00:06.616 ********* 2026-03-03 00:58:13.661100 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.661104 | orchestrator | 2026-03-03 00:58:13.661110 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-03-03 00:58:13.661116 | orchestrator | Tuesday 03 March 2026 00:56:07 +0000 (0:00:00.638) 0:00:07.254 ********* 2026-03-03 00:58:13.661122 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.661129 | orchestrator | 2026-03-03 00:58:13.661135 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-03 00:58:13.661141 | orchestrator | Tuesday 03 March 2026 00:56:08 +0000 (0:00:00.912) 0:00:08.167 ********* 2026-03-03 00:58:13.661146 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.661152 | orchestrator | 2026-03-03 00:58:13.661158 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-03 00:58:13.661164 | orchestrator | Tuesday 03 March 2026 00:56:09 +0000 (0:00:00.766) 0:00:08.934 ********* 2026-03-03 00:58:13.661170 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.661183 | orchestrator | 2026-03-03 00:58:13.661189 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-03 00:58:13.661194 | orchestrator | Tuesday 03 March 2026 00:56:10 +0000 (0:00:00.854) 0:00:09.788 ********* 2026-03-03 00:58:13.661200 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:58:13.661206 | orchestrator | 2026-03-03 00:58:13.661211 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-03 00:58:13.661216 | orchestrator | Tuesday 03 March 2026 00:56:10 +0000 (0:00:00.497) 0:00:10.286 ********* 2026-03-03 00:58:13.661222 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:58:13.661229 | orchestrator | 2026-03-03 00:58:13.661235 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-03 00:58:13.661241 | orchestrator | Tuesday 03 March 2026 00:56:11 +0000 (0:00:00.934) 0:00:11.220 ********* 2026-03-03 
00:58:13.661246 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.661252 | orchestrator | 2026-03-03 00:58:13.661265 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-03 00:58:13.661271 | orchestrator | Tuesday 03 March 2026 00:56:12 +0000 (0:00:00.802) 0:00:12.022 ********* 2026-03-03 00:58:13.661277 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.661282 | orchestrator | 2026-03-03 00:58:13.661288 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-03 00:58:13.661294 | orchestrator | Tuesday 03 March 2026 00:56:12 +0000 (0:00:00.365) 0:00:12.388 ********* 2026-03-03 00:58:13.661305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 00:58:13.661320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 00:58:13.661328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 
00:58:13.661340 | orchestrator | 2026-03-03 00:58:13.661346 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-03 00:58:13.661351 | orchestrator | Tuesday 03 March 2026 00:56:13 +0000 (0:00:00.822) 0:00:13.211 ********* 2026-03-03 00:58:13.661362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 00:58:13.661369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 00:58:13.661387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 00:58:13.661399 | orchestrator | 2026-03-03 00:58:13.661405 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-03 00:58:13.661411 | orchestrator | Tuesday 03 March 2026 00:56:15 +0000 (0:00:01.915) 0:00:15.126 ********* 2026-03-03 00:58:13.661416 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-03 00:58:13.661422 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-03 00:58:13.661428 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-03 00:58:13.661433 | orchestrator | 2026-03-03 00:58:13.661439 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-03 00:58:13.661445 | orchestrator | Tuesday 03 March 2026 00:56:17 +0000 (0:00:01.999) 0:00:17.126 ********* 2026-03-03 00:58:13.661451 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-03 00:58:13.661456 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-03 00:58:13.661462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-03 00:58:13.661468 | orchestrator | 2026-03-03 00:58:13.661474 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-03 00:58:13.661480 | orchestrator | Tuesday 03 March 2026 00:56:20 +0000 (0:00:02.973) 0:00:20.099 ********* 2026-03-03 00:58:13.661485 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-03 00:58:13.661491 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-03 00:58:13.661497 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-03 00:58:13.661503 | orchestrator | 2026-03-03 00:58:13.661514 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-03 00:58:13.661520 | orchestrator | Tuesday 03 March 2026 00:56:22 +0000 (0:00:02.124) 0:00:22.224 ********* 2026-03-03 00:58:13.661526 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-03 00:58:13.661532 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-03 00:58:13.661539 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-03 00:58:13.661546 | orchestrator | 2026-03-03 00:58:13.661552 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-03 00:58:13.661559 | orchestrator | Tuesday 03 March 2026 00:56:25 +0000 (0:00:02.447) 0:00:24.671 ********* 2026-03-03 00:58:13.661565 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-03 00:58:13.661572 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-03 00:58:13.661578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-03 00:58:13.661584 | orchestrator | 2026-03-03 00:58:13.661591 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-03 00:58:13.661598 | orchestrator | Tuesday 03 March 2026 00:56:26 +0000 (0:00:01.897) 0:00:26.569 ********* 2026-03-03 00:58:13.661604 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-03 00:58:13.661610 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-03 00:58:13.661616 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-03 00:58:13.661622 | orchestrator | 2026-03-03 00:58:13.661628 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-03 00:58:13.661641 | orchestrator | Tuesday 03 March 2026 00:56:28 +0000 (0:00:01.279) 0:00:27.849 ********* 2026-03-03 
00:58:13.661648 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.661656 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:58:13.661660 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:58:13.661664 | orchestrator | 2026-03-03 00:58:13.661668 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-03 00:58:13.661672 | orchestrator | Tuesday 03 March 2026 00:56:28 +0000 (0:00:00.413) 0:00:28.263 ********* 2026-03-03 00:58:13.661684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 00:58:13.661689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 00:58:13.661697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 00:58:13.661702 | orchestrator | 2026-03-03 00:58:13.661706 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-03 00:58:13.661710 | orchestrator | Tuesday 03 March 2026 
00:56:30 +0000 (0:00:02.111) 0:00:30.374 ********* 2026-03-03 00:58:13.661714 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:58:13.661721 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:58:13.661725 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:58:13.661729 | orchestrator | 2026-03-03 00:58:13.661733 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-03 00:58:13.661737 | orchestrator | Tuesday 03 March 2026 00:56:31 +0000 (0:00:00.974) 0:00:31.349 ********* 2026-03-03 00:58:13.661741 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:58:13.661744 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:58:13.661748 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:58:13.661752 | orchestrator | 2026-03-03 00:58:13.661756 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-03 00:58:13.661760 | orchestrator | Tuesday 03 March 2026 00:56:38 +0000 (0:00:06.610) 0:00:37.960 ********* 2026-03-03 00:58:13.661764 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:58:13.661768 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:58:13.661772 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:58:13.661776 | orchestrator | 2026-03-03 00:58:13.661782 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-03 00:58:13.661788 | orchestrator | 2026-03-03 00:58:13.661794 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-03 00:58:13.661803 | orchestrator | Tuesday 03 March 2026 00:56:38 +0000 (0:00:00.318) 0:00:38.278 ********* 2026-03-03 00:58:13.661812 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:58:13.661820 | orchestrator | 2026-03-03 00:58:13.661826 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-03 00:58:13.661832 | orchestrator | Tuesday 03 
March 2026 00:56:39 +0000 (0:00:00.567) 0:00:38.846 ********* 2026-03-03 00:58:13.661838 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:58:13.661845 | orchestrator | 2026-03-03 00:58:13.661851 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-03 00:58:13.661861 | orchestrator | Tuesday 03 March 2026 00:56:39 +0000 (0:00:00.274) 0:00:39.121 ********* 2026-03-03 00:58:13.661868 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:58:13.661873 | orchestrator | 2026-03-03 00:58:13.661879 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-03 00:58:13.661884 | orchestrator | Tuesday 03 March 2026 00:56:41 +0000 (0:00:01.949) 0:00:41.070 ********* 2026-03-03 00:58:13.661890 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:58:13.661896 | orchestrator | 2026-03-03 00:58:13.661902 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-03 00:58:13.661909 | orchestrator | 2026-03-03 00:58:13.661915 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-03 00:58:13.661922 | orchestrator | Tuesday 03 March 2026 00:57:34 +0000 (0:00:53.240) 0:01:34.310 ********* 2026-03-03 00:58:13.661928 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:58:13.661934 | orchestrator | 2026-03-03 00:58:13.661940 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-03 00:58:13.661946 | orchestrator | Tuesday 03 March 2026 00:57:35 +0000 (0:00:00.579) 0:01:34.890 ********* 2026-03-03 00:58:13.661950 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:58:13.661956 | orchestrator | 2026-03-03 00:58:13.661963 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-03 00:58:13.661969 | orchestrator | Tuesday 03 March 2026 00:57:35 +0000 (0:00:00.228) 0:01:35.119 
********* 2026-03-03 00:58:13.661975 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:58:13.661981 | orchestrator | 2026-03-03 00:58:13.661988 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-03 00:58:13.662160 | orchestrator | Tuesday 03 March 2026 00:57:37 +0000 (0:00:02.019) 0:01:37.139 ********* 2026-03-03 00:58:13.662182 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:58:13.662187 | orchestrator | 2026-03-03 00:58:13.662191 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-03 00:58:13.662195 | orchestrator | 2026-03-03 00:58:13.662199 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-03 00:58:13.662211 | orchestrator | Tuesday 03 March 2026 00:57:51 +0000 (0:00:14.099) 0:01:51.238 ********* 2026-03-03 00:58:13.662215 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:58:13.662219 | orchestrator | 2026-03-03 00:58:13.662223 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-03 00:58:13.662227 | orchestrator | Tuesday 03 March 2026 00:57:52 +0000 (0:00:00.772) 0:01:52.010 ********* 2026-03-03 00:58:13.662230 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:58:13.662234 | orchestrator | 2026-03-03 00:58:13.662238 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-03 00:58:13.662246 | orchestrator | Tuesday 03 March 2026 00:57:52 +0000 (0:00:00.264) 0:01:52.274 ********* 2026-03-03 00:58:13.662250 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:58:13.662254 | orchestrator | 2026-03-03 00:58:13.662258 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-03 00:58:13.662261 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:02.153) 0:01:54.427 ********* 2026-03-03 00:58:13.662265 | orchestrator | 
changed: [testbed-node-2] 2026-03-03 00:58:13.662269 | orchestrator | 2026-03-03 00:58:13.662273 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-03 00:58:13.662277 | orchestrator | 2026-03-03 00:58:13.662280 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-03 00:58:13.662284 | orchestrator | Tuesday 03 March 2026 00:58:08 +0000 (0:00:13.340) 0:02:07.767 ********* 2026-03-03 00:58:13.662288 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:58:13.662292 | orchestrator | 2026-03-03 00:58:13.662296 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-03 00:58:13.662300 | orchestrator | Tuesday 03 March 2026 00:58:09 +0000 (0:00:00.904) 0:02:08.672 ********* 2026-03-03 00:58:13.662304 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:58:13.662308 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:58:13.662311 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:58:13.662315 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-03 00:58:13.662319 | orchestrator | enable_outward_rabbitmq_True 2026-03-03 00:58:13.662323 | orchestrator | 2026-03-03 00:58:13.662327 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-03 00:58:13.662330 | orchestrator | skipping: no hosts matched 2026-03-03 00:58:13.662334 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-03 00:58:13.662338 | orchestrator | outward_rabbitmq_restart 2026-03-03 00:58:13.662342 | orchestrator | 2026-03-03 00:58:13.662346 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-03 00:58:13.662350 | orchestrator | skipping: no hosts matched 2026-03-03 00:58:13.662353 | orchestrator | 2026-03-03 00:58:13.662357 | orchestrator | PLAY 
[Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-03 00:58:13.662361 | orchestrator | skipping: no hosts matched 2026-03-03 00:58:13.662365 | orchestrator | 2026-03-03 00:58:13.662369 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:58:13.662373 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-03 00:58:13.662378 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-03 00:58:13.662382 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:58:13.662386 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 00:58:13.662390 | orchestrator | 2026-03-03 00:58:13.662394 | orchestrator | 2026-03-03 00:58:13.662397 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:58:13.662405 | orchestrator | Tuesday 03 March 2026 00:58:11 +0000 (0:00:02.875) 0:02:11.548 ********* 2026-03-03 00:58:13.662416 | orchestrator | =============================================================================== 2026-03-03 00:58:13.662421 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.68s 2026-03-03 00:58:13.662424 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.61s 2026-03-03 00:58:13.662428 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.12s 2026-03-03 00:58:13.662432 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.16s 2026-03-03 00:58:13.662436 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.97s 2026-03-03 00:58:13.662440 | orchestrator | rabbitmq : Enable all stable feature 
flags ------------------------------ 2.88s 2026-03-03 00:58:13.662444 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.45s 2026-03-03 00:58:13.662448 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.12s 2026-03-03 00:58:13.662452 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.11s 2026-03-03 00:58:13.662456 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.00s 2026-03-03 00:58:13.662460 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.92s 2026-03-03 00:58:13.662464 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.92s 2026-03-03 00:58:13.662467 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.90s 2026-03-03 00:58:13.662471 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.28s 2026-03-03 00:58:13.662475 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s 2026-03-03 00:58:13.662479 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.97s 2026-03-03 00:58:13.662482 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.93s 2026-03-03 00:58:13.662486 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s 2026-03-03 00:58:13.662490 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 0.91s 2026-03-03 00:58:13.662494 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.90s 2026-03-03 00:58:13.662498 | orchestrator | 2026-03-03 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-03-03 00:58:16.742362 | orchestrator | 2026-03-03 00:58:16 | INFO  | Task 
edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:58:16.742643 | orchestrator | 2026-03-03 00:58:16 | INFO  | Task aa563303-9ae9-4fc0-84d8-477abe2f92a1 is in state STARTED
2026-03-03 00:58:16.745213 | orchestrator | 2026-03-03 00:58:16 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:58:16.745455 | orchestrator | 2026-03-03 00:58:16 | INFO  | Task 7d59ba8a-cf5d-4052-95a8-f1c7c45cc559 is in state STARTED
2026-03-03 00:58:16.747778 | orchestrator | 2026-03-03 00:58:16 | INFO  | Task 0b327cef-31fc-44a9-80ca-3821c18b9694 is in state STARTED
2026-03-03 00:58:16.747832 | orchestrator | 2026-03-03 00:58:16 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:58:19.808740 | orchestrator | 2026-03-03 00:58:19 | INFO  | Task 0b327cef-31fc-44a9-80ca-3821c18b9694 is in state SUCCESS
2026-03-03 00:58:25.885128 | orchestrator | 2026-03-03 00:58:25 | INFO  | Task aa563303-9ae9-4fc0-84d8-477abe2f92a1 is in state SUCCESS
2026-03-03 00:59:05.406331 | orchestrator | 2026-03-03 00:59:05 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:59:05.406589 | orchestrator | 2026-03-03 00:59:05 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:59:05.408392 | orchestrator | 2026-03-03 00:59:05 | INFO  | Task 7d59ba8a-cf5d-4052-95a8-f1c7c45cc559 is in state SUCCESS
2026-03-03 00:59:05.409885 | orchestrator |
2026-03-03 00:59:05.409917 | orchestrator |
2026-03-03 00:59:05.409926 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-03 00:59:05.409934 | orchestrator |
2026-03-03 00:59:05.409941 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-03 00:59:05.409949 | orchestrator | Tuesday 03 March 2026 00:58:17 +0000 (0:00:00.120) 0:00:00.120 *********
2026-03-03
00:59:05.409957 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-03 00:59:05.409964 | orchestrator |
2026-03-03 00:59:05.409971 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-03 00:59:05.409978 | orchestrator | Tuesday 03 March 2026 00:58:17 +0000 (0:00:00.709) 0:00:00.829 *********
2026-03-03 00:59:05.409985 | orchestrator | changed: [testbed-manager]
2026-03-03 00:59:05.409992 | orchestrator |
2026-03-03 00:59:05.409999 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-03 00:59:05.410006 | orchestrator | Tuesday 03 March 2026 00:58:18 +0000 (0:00:00.953) 0:00:01.783 *********
2026-03-03 00:59:05.410111 | orchestrator | changed: [testbed-manager]
2026-03-03 00:59:05.410123 | orchestrator |
2026-03-03 00:59:05.410131 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:59:05.410138 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:59:05.410147 | orchestrator |
2026-03-03 00:59:05.410155 | orchestrator |
2026-03-03 00:59:05.410163 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:59:05.410170 | orchestrator | Tuesday 03 March 2026 00:58:19 +0000 (0:00:00.353) 0:00:02.136 *********
2026-03-03 00:59:05.410178 | orchestrator | ===============================================================================
2026-03-03 00:59:05.410185 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.95s
2026-03-03 00:59:05.410193 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.71s
2026-03-03 00:59:05.410201 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.35s
2026-03-03 00:59:05.410208 | orchestrator |
2026-03-03 00:59:05.410216 | orchestrator |
2026-03-03 00:59:05.410224 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-03 00:59:05.410231 | orchestrator |
2026-03-03 00:59:05.410238 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-03 00:59:05.410245 | orchestrator | Tuesday 03 March 2026 00:58:17 +0000 (0:00:00.174) 0:00:00.174 *********
2026-03-03 00:59:05.410253 | orchestrator | ok: [testbed-manager]
2026-03-03 00:59:05.410261 | orchestrator |
2026-03-03 00:59:05.410268 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-03 00:59:05.410276 | orchestrator | Tuesday 03 March 2026 00:58:18 +0000 (0:00:00.495) 0:00:00.669 *********
2026-03-03 00:59:05.410284 | orchestrator | ok: [testbed-manager]
2026-03-03 00:59:05.410291 | orchestrator |
2026-03-03 00:59:05.410310 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-03 00:59:05.410318 | orchestrator | Tuesday 03 March 2026 00:58:18 +0000 (0:00:00.576) 0:00:01.246 *********
2026-03-03 00:59:05.410325 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-03 00:59:05.410344 | orchestrator |
2026-03-03 00:59:05.410352 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-03 00:59:05.410359 | orchestrator | Tuesday 03 March 2026 00:58:19 +0000 (0:00:00.676) 0:00:01.922 *********
2026-03-03 00:59:05.410366 | orchestrator | changed: [testbed-manager]
2026-03-03 00:59:05.410373 | orchestrator |
2026-03-03 00:59:05.410380 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-03 00:59:05.410387 | orchestrator | Tuesday 03 March 2026 00:58:20 +0000 (0:00:01.259) 0:00:03.182 *********
2026-03-03 00:59:05.410394 | orchestrator | changed: [testbed-manager]
2026-03-03 00:59:05.410402 | orchestrator |
2026-03-03 00:59:05.410409 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-03 00:59:05.410415 | orchestrator | Tuesday 03 March 2026 00:58:21 +0000 (0:00:00.497) 0:00:03.680 *********
2026-03-03 00:59:05.410422 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-03 00:59:05.410430 | orchestrator |
2026-03-03 00:59:05.410436 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-03 00:59:05.410443 | orchestrator | Tuesday 03 March 2026 00:58:22 +0000 (0:00:01.493) 0:00:05.173 *********
2026-03-03 00:59:05.410451 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-03 00:59:05.410475 | orchestrator |
2026-03-03 00:59:05.410482 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-03 00:59:05.410489 | orchestrator | Tuesday 03 March 2026 00:58:23 +0000 (0:00:00.770) 0:00:05.944 *********
2026-03-03 00:59:05.410497 | orchestrator | ok: [testbed-manager]
2026-03-03 00:59:05.410504 | orchestrator |
2026-03-03 00:59:05.410511 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-03 00:59:05.410518 | orchestrator | Tuesday 03 March 2026 00:58:23 +0000 (0:00:00.338) 0:00:06.283 *********
2026-03-03 00:59:05.410525 | orchestrator | ok: [testbed-manager]
2026-03-03 00:59:05.410533 | orchestrator |
2026-03-03 00:59:05.410540 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 00:59:05.410547 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 00:59:05.410554 | orchestrator |
2026-03-03 00:59:05.410562 | orchestrator |
2026-03-03 00:59:05.410570 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 00:59:05.410578 | orchestrator | Tuesday 03 March 2026 00:58:23 +0000 (0:00:00.249) 0:00:06.532 *********
2026-03-03 00:59:05.410586 | orchestrator | ===============================================================================
2026-03-03 00:59:05.410593 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.49s
2026-03-03 00:59:05.410607 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.26s
2026-03-03 00:59:05.410616 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.77s
2026-03-03 00:59:05.410634 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.68s
2026-03-03 00:59:05.410643 | orchestrator | Create .kube directory -------------------------------------------------- 0.58s
2026-03-03 00:59:05.410651 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.50s
2026-03-03 00:59:05.410659 | orchestrator | Get home directory of operator user ------------------------------------- 0.50s
2026-03-03 00:59:05.410667 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.34s
2026-03-03 00:59:05.410675 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.25s
2026-03-03 00:59:05.410684 | orchestrator |
2026-03-03 00:59:05.410692 | orchestrator |
2026-03-03 00:59:05.410699 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 00:59:05.410708 | orchestrator |
2026-03-03 00:59:05.410716 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 00:59:05.410724 | orchestrator | Tuesday 03 March 2026 00:56:46 +0000 (0:00:00.312) 0:00:00.312 *********
2026-03-03 00:59:05.410732 | orchestrator | ok: [testbed-node-3]
2026-03-03 00:59:05.410741 | orchestrator | ok: [testbed-node-4]
2026-03-03 00:59:05.410755 | orchestrator | ok: [testbed-node-5]
2026-03-03 00:59:05.410763 | orchestrator |
ok: [testbed-node-0]
2026-03-03 00:59:05.410771 | orchestrator | ok: [testbed-node-1]
2026-03-03 00:59:05.410779 | orchestrator | ok: [testbed-node-2]
2026-03-03 00:59:05.410787 | orchestrator |
2026-03-03 00:59:05.410795 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 00:59:05.410804 | orchestrator | Tuesday 03 March 2026 00:56:46 +0000 (0:00:00.867) 0:00:01.179 *********
2026-03-03 00:59:05.410812 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-03 00:59:05.410885 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-03-03 00:59:05.410893 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-03-03 00:59:05.410900 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-03 00:59:05.410908 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-03 00:59:05.410914 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-03-03 00:59:05.410921 | orchestrator |
2026-03-03 00:59:05.410928 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-03-03 00:59:05.410935 | orchestrator |
2026-03-03 00:59:05.410943 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-03-03 00:59:05.410950 | orchestrator | Tuesday 03 March 2026 00:56:48 +0000 (0:00:01.291) 0:00:02.470 *********
2026-03-03 00:59:05.410958 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 00:59:05.410966 | orchestrator |
2026-03-03 00:59:05.410973 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-03-03 00:59:05.410980 | orchestrator | Tuesday 03 March 2026 00:56:49 +0000 (0:00:01.262) 0:00:03.733 *********
2026-03-03 00:59:05.410988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 00:59:05.410998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411051 | orchestrator |
2026-03-03 00:59:05.411059 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-03-03 00:59:05.411066 | orchestrator | Tuesday 03 March 2026 00:56:50 +0000 (0:00:01.329) 0:00:05.063 *********
2026-03-03 00:59:05.411073 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411116 | orchestrator |
2026-03-03 00:59:05.411123 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-03-03 00:59:05.411130 | orchestrator | Tuesday 03 March 2026 00:56:52 +0000 (0:00:01.874) 0:00:06.937 *********
2026-03-03 00:59:05.411137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411209 | orchestrator |
2026-03-03 00:59:05.411217 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-03-03 00:59:05.411224 | orchestrator | Tuesday 03 March 2026 00:56:53 +0000 (0:00:01.262) 0:00:08.200 *********
2026-03-03 00:59:05.411231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411293 | orchestrator |
2026-03-03 00:59:05.411301 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-03-03 00:59:05.411308 | orchestrator | Tuesday 03 March 2026 00:56:55 +0000 (0:00:01.586) 0:00:09.787 *********
2026-03-03 00:59:05.411316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', ...})
2026-03-03 00:59:05.411364 | orchestrator |
2026-03-03 00:59:05.411372 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-03-03 00:59:05.411379 | orchestrator | Tuesday 03 March 2026 00:56:57 +0000 (0:00:01.555) 0:00:11.343 *********
2026-03-03 00:59:05.411386 | orchestrator | changed: [testbed-node-3]
2026-03-03 00:59:05.411394 | orchestrator | changed: [testbed-node-0]
2026-03-03 00:59:05.411401 | orchestrator | changed: [testbed-node-4]
2026-03-03 00:59:05.411409 | orchestrator | changed: [testbed-node-5]
2026-03-03 00:59:05.411416 | orchestrator | changed: [testbed-node-2]
2026-03-03 00:59:05.411423 | orchestrator | changed: [testbed-node-1]
2026-03-03 00:59:05.411430 | orchestrator |
2026-03-03 00:59:05.411437 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-03 00:59:05.411444 | orchestrator | Tuesday 03 March 2026 00:57:00 +0000 (0:00:03.115) 0:00:14.458 *********
2026-03-03 00:59:05.411451 | orchestrator | changed:
[testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-03 00:59:05.411459 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-03 00:59:05.411469 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-03 00:59:05.411483 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-03 00:59:05.411491 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-03 00:59:05.411498 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-03 00:59:05.411505 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-03 00:59:05.411512 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-03 00:59:05.411519 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-03 00:59:05.411527 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-03 00:59:05.411533 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-03 00:59:05.411539 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-03 00:59:05.411546 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-03 00:59:05.411553 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-03 00:59:05.411560 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-03 00:59:05.411567 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-03 00:59:05.411574 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-03 00:59:05.411582 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-03 00:59:05.411589 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-03 00:59:05.411597 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-03 00:59:05.411604 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-03 00:59:05.411611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-03 00:59:05.411625 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-03 00:59:05.411631 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-03 00:59:05.411636 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-03 00:59:05.411643 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-03 00:59:05.411649 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-03 00:59:05.411655 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-03 00:59:05.411661 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-03 00:59:05.411666 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-03 00:59:05.411673 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-03 00:59:05.411679 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-03 00:59:05.411685 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-03 00:59:05.411691 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-03 00:59:05.411697 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-03 00:59:05.411702 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-03 00:59:05.411709 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-03 00:59:05.411716 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-03 00:59:05.411722 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-03 00:59:05.411731 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-03 00:59:05.411738 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-03 00:59:05.411748 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-03 00:59:05.411755 | orchestrator |
changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-03 00:59:05.411761 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-03 00:59:05.411768 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-03 00:59:05.411774 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-03 00:59:05.411781 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-03 00:59:05.411787 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-03 00:59:05.411793 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-03 00:59:05.411800 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-03 00:59:05.411807 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-03 00:59:05.411834 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-03 00:59:05.411843 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-03 00:59:05.411849 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-03 00:59:05.411856 | orchestrator | 2026-03-03 
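
As recorded above, the "Configure OVN in OVSDB" task writes Open vSwitch external-ids such as `ovn-encap-type=geneve` and an `ovn-remote` pointing at the three southbound DB endpoints. The `ovn-remote` value is a comma-separated list of `proto:ip:port` endpoints; a minimal sketch of parsing that format (hypothetical helper for illustration, not part of the playbook):

```python
# Parse an OVN "ovn-remote" connection string as seen in the log above, e.g.
# "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642".
# Hypothetical helper for illustration only; not part of kolla-ansible.

def parse_ovn_remote(remote: str) -> list[tuple[str, str, int]]:
    """Split a comma-separated ovn-remote string into (proto, ip, port) triples."""
    endpoints = []
    for part in remote.split(","):
        # rsplit from the right so only the last two colons delimit ip and port
        proto, ip, port = part.strip().rsplit(":", 2)
        endpoints.append((proto, ip, int(port)))
    return endpoints

remote = "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
print(parse_ovn_remote(remote))
```

Each compute node in the log receives the same three-endpoint list, so any of the clustered SB database members can serve its ovn-controller connection.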
00:59:05.411863 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-03 00:59:05.411870 | orchestrator | Tuesday 03 March 2026 00:57:19 +0000 (0:00:19.116) 0:00:33.575 ********* 2026-03-03 00:59:05.411877 | orchestrator | 2026-03-03 00:59:05.411884 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-03 00:59:05.411890 | orchestrator | Tuesday 03 March 2026 00:57:19 +0000 (0:00:00.072) 0:00:33.648 ********* 2026-03-03 00:59:05.411897 | orchestrator | 2026-03-03 00:59:05.411905 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-03 00:59:05.411911 | orchestrator | Tuesday 03 March 2026 00:57:19 +0000 (0:00:00.067) 0:00:33.716 ********* 2026-03-03 00:59:05.411918 | orchestrator | 2026-03-03 00:59:05.411925 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-03 00:59:05.411931 | orchestrator | Tuesday 03 March 2026 00:57:19 +0000 (0:00:00.076) 0:00:33.792 ********* 2026-03-03 00:59:05.411937 | orchestrator | 2026-03-03 00:59:05.411943 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-03 00:59:05.411949 | orchestrator | Tuesday 03 March 2026 00:57:19 +0000 (0:00:00.064) 0:00:33.856 ********* 2026-03-03 00:59:05.411955 | orchestrator | 2026-03-03 00:59:05.411962 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-03 00:59:05.411969 | orchestrator | Tuesday 03 March 2026 00:57:19 +0000 (0:00:00.069) 0:00:33.926 ********* 2026-03-03 00:59:05.411976 | orchestrator | 2026-03-03 00:59:05.411982 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-03 00:59:05.411989 | orchestrator | Tuesday 03 March 2026 00:57:19 +0000 (0:00:00.069) 0:00:33.995 ********* 2026-03-03 00:59:05.411995 | orchestrator | ok: 
[testbed-node-3] 2026-03-03 00:59:05.412002 | orchestrator | ok: [testbed-node-5] 2026-03-03 00:59:05.412007 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.412013 | orchestrator | ok: [testbed-node-4] 2026-03-03 00:59:05.412019 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.412025 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.412031 | orchestrator | 2026-03-03 00:59:05.412037 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-03 00:59:05.412043 | orchestrator | Tuesday 03 March 2026 00:57:21 +0000 (0:00:01.567) 0:00:35.563 ********* 2026-03-03 00:59:05.412048 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:59:05.412055 | orchestrator | changed: [testbed-node-4] 2026-03-03 00:59:05.412061 | orchestrator | changed: [testbed-node-3] 2026-03-03 00:59:05.412067 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:59:05.412073 | orchestrator | changed: [testbed-node-5] 2026-03-03 00:59:05.412079 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:59:05.412085 | orchestrator | 2026-03-03 00:59:05.412091 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-03 00:59:05.412097 | orchestrator | 2026-03-03 00:59:05.412103 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-03 00:59:05.412109 | orchestrator | Tuesday 03 March 2026 00:57:46 +0000 (0:00:25.471) 0:01:01.034 ********* 2026-03-03 00:59:05.412115 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:59:05.412121 | orchestrator | 2026-03-03 00:59:05.412127 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-03 00:59:05.412134 | orchestrator | Tuesday 03 March 2026 00:57:47 +0000 (0:00:00.988) 0:01:02.022 ********* 2026-03-03 00:59:05.412153 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:59:05.412160 | orchestrator | 2026-03-03 00:59:05.412173 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-03 00:59:05.412179 | orchestrator | Tuesday 03 March 2026 00:57:48 +0000 (0:00:00.896) 0:01:02.919 ********* 2026-03-03 00:59:05.412186 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.412193 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.412199 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.412206 | orchestrator | 2026-03-03 00:59:05.412212 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-03 00:59:05.412218 | orchestrator | Tuesday 03 March 2026 00:57:50 +0000 (0:00:01.301) 0:01:04.220 ********* 2026-03-03 00:59:05.412224 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.412231 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.412237 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.412243 | orchestrator | 2026-03-03 00:59:05.412250 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-03 00:59:05.412256 | orchestrator | Tuesday 03 March 2026 00:57:50 +0000 (0:00:00.536) 0:01:04.757 ********* 2026-03-03 00:59:05.412262 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.412268 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.412274 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.412280 | orchestrator | 2026-03-03 00:59:05.412285 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-03 00:59:05.412291 | orchestrator | Tuesday 03 March 2026 00:57:51 +0000 (0:00:00.692) 0:01:05.449 ********* 2026-03-03 00:59:05.412297 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.412302 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.412308 | orchestrator 
| ok: [testbed-node-2] 2026-03-03 00:59:05.412313 | orchestrator | 2026-03-03 00:59:05.412319 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-03 00:59:05.412324 | orchestrator | Tuesday 03 March 2026 00:57:51 +0000 (0:00:00.481) 0:01:05.930 ********* 2026-03-03 00:59:05.412330 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.412336 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.412342 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.412348 | orchestrator | 2026-03-03 00:59:05.412354 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-03 00:59:05.412360 | orchestrator | Tuesday 03 March 2026 00:57:52 +0000 (0:00:00.684) 0:01:06.615 ********* 2026-03-03 00:59:05.412366 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412372 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412378 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.412384 | orchestrator | 2026-03-03 00:59:05.412390 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-03 00:59:05.412396 | orchestrator | Tuesday 03 March 2026 00:57:52 +0000 (0:00:00.387) 0:01:07.002 ********* 2026-03-03 00:59:05.412402 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412408 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412414 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.412420 | orchestrator | 2026-03-03 00:59:05.412426 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-03 00:59:05.412432 | orchestrator | Tuesday 03 March 2026 00:57:53 +0000 (0:00:00.648) 0:01:07.650 ********* 2026-03-03 00:59:05.412438 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412444 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412451 | orchestrator | skipping: 
[testbed-node-2] 2026-03-03 00:59:05.412458 | orchestrator | 2026-03-03 00:59:05.412464 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-03 00:59:05.412470 | orchestrator | Tuesday 03 March 2026 00:57:53 +0000 (0:00:00.324) 0:01:07.975 ********* 2026-03-03 00:59:05.412477 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412483 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412495 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.412501 | orchestrator | 2026-03-03 00:59:05.412508 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-03 00:59:05.412514 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:00.503) 0:01:08.478 ********* 2026-03-03 00:59:05.412519 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412525 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412531 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.412537 | orchestrator | 2026-03-03 00:59:05.412544 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-03 00:59:05.412551 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:00.417) 0:01:08.896 ********* 2026-03-03 00:59:05.412558 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412565 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412571 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.412577 | orchestrator | 2026-03-03 00:59:05.412585 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-03 00:59:05.412591 | orchestrator | Tuesday 03 March 2026 00:57:55 +0000 (0:00:00.461) 0:01:09.357 ********* 2026-03-03 00:59:05.412598 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412605 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412612 | orchestrator | skipping: 
[testbed-node-2] 2026-03-03 00:59:05.412620 | orchestrator | 2026-03-03 00:59:05.412628 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-03 00:59:05.412635 | orchestrator | Tuesday 03 March 2026 00:57:55 +0000 (0:00:00.288) 0:01:09.646 ********* 2026-03-03 00:59:05.412642 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412650 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412658 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.412665 | orchestrator | 2026-03-03 00:59:05.412672 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-03 00:59:05.412679 | orchestrator | Tuesday 03 March 2026 00:57:55 +0000 (0:00:00.386) 0:01:10.032 ********* 2026-03-03 00:59:05.412687 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412694 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412701 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.412708 | orchestrator | 2026-03-03 00:59:05.412716 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-03 00:59:05.412723 | orchestrator | Tuesday 03 March 2026 00:57:56 +0000 (0:00:00.325) 0:01:10.358 ********* 2026-03-03 00:59:05.412735 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412742 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412750 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.412757 | orchestrator | 2026-03-03 00:59:05.412765 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-03 00:59:05.412779 | orchestrator | Tuesday 03 March 2026 00:57:56 +0000 (0:00:00.268) 0:01:10.627 ********* 2026-03-03 00:59:05.412787 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412795 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412803 | orchestrator | skipping: 
[testbed-node-2] 2026-03-03 00:59:05.412811 | orchestrator | 2026-03-03 00:59:05.412851 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-03 00:59:05.412859 | orchestrator | Tuesday 03 March 2026 00:57:56 +0000 (0:00:00.249) 0:01:10.876 ********* 2026-03-03 00:59:05.412866 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.412873 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.412880 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.412886 | orchestrator | 2026-03-03 00:59:05.412893 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-03 00:59:05.412900 | orchestrator | Tuesday 03 March 2026 00:57:56 +0000 (0:00:00.248) 0:01:11.124 ********* 2026-03-03 00:59:05.412908 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 00:59:05.412922 | orchestrator | 2026-03-03 00:59:05.412930 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-03 00:59:05.412938 | orchestrator | Tuesday 03 March 2026 00:57:57 +0000 (0:00:00.733) 0:01:11.858 ********* 2026-03-03 00:59:05.412945 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.412953 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.412962 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.412969 | orchestrator | 2026-03-03 00:59:05.412980 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-03 00:59:05.412990 | orchestrator | Tuesday 03 March 2026 00:57:58 +0000 (0:00:00.496) 0:01:12.354 ********* 2026-03-03 00:59:05.413001 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.413011 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.413022 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.413032 | orchestrator | 2026-03-03 00:59:05.413039 | 
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-03 00:59:05.413046 | orchestrator | Tuesday 03 March 2026 00:57:58 +0000 (0:00:00.701) 0:01:13.055 ********* 2026-03-03 00:59:05.413054 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.413062 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.413071 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.413078 | orchestrator | 2026-03-03 00:59:05.413086 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-03 00:59:05.413094 | orchestrator | Tuesday 03 March 2026 00:57:59 +0000 (0:00:00.720) 0:01:13.776 ********* 2026-03-03 00:59:05.413102 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.413110 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.413118 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.413126 | orchestrator | 2026-03-03 00:59:05.413134 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-03 00:59:05.413142 | orchestrator | Tuesday 03 March 2026 00:58:00 +0000 (0:00:00.554) 0:01:14.331 ********* 2026-03-03 00:59:05.413150 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.413158 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.413166 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.413173 | orchestrator | 2026-03-03 00:59:05.413181 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-03 00:59:05.413189 | orchestrator | Tuesday 03 March 2026 00:58:00 +0000 (0:00:00.574) 0:01:14.906 ********* 2026-03-03 00:59:05.413197 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.413204 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.413212 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.413220 | orchestrator | 2026-03-03 
00:59:05.413227 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-03 00:59:05.413235 | orchestrator | Tuesday 03 March 2026 00:58:01 +0000 (0:00:00.350) 0:01:15.256 ********* 2026-03-03 00:59:05.413243 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.413251 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.413259 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.413266 | orchestrator | 2026-03-03 00:59:05.413273 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-03 00:59:05.413280 | orchestrator | Tuesday 03 March 2026 00:58:01 +0000 (0:00:00.866) 0:01:16.123 ********* 2026-03-03 00:59:05.413287 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.413294 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.413301 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.413308 | orchestrator | 2026-03-03 00:59:05.413315 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-03 00:59:05.413322 | orchestrator | Tuesday 03 March 2026 00:58:02 +0000 (0:00:00.546) 0:01:16.670 ********* 2026-03-03 00:59:05.413331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413416 | orchestrator | 2026-03-03 00:59:05.413423 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-03 00:59:05.413430 | orchestrator | Tuesday 03 March 2026 00:58:04 +0000 (0:00:02.265) 0:01:18.935 ********* 2026-03-03 00:59:05.413437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413491 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413512 | orchestrator | 2026-03-03 00:59:05.413518 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-03 00:59:05.413524 | orchestrator | Tuesday 03 March 2026 00:58:09 +0000 (0:00:04.956) 0:01:23.892 ********* 2026-03-03 00:59:05.413530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413541 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.413599 | orchestrator | 2026-03-03 00:59:05.413605 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-03 00:59:05.413611 | orchestrator | Tuesday 03 March 2026 00:58:12 +0000 (0:00:02.871) 0:01:26.764 ********* 2026-03-03 00:59:05.413617 | orchestrator | 2026-03-03 00:59:05.413623 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-03 00:59:05.413629 | orchestrator | Tuesday 03 March 2026 00:58:12 +0000 (0:00:00.048) 0:01:26.812 ********* 2026-03-03 
00:59:05.413635 | orchestrator | 2026-03-03 00:59:05.413641 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-03 00:59:05.413647 | orchestrator | Tuesday 03 March 2026 00:58:12 +0000 (0:00:00.049) 0:01:26.862 ********* 2026-03-03 00:59:05.413657 | orchestrator | 2026-03-03 00:59:05.413663 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-03 00:59:05.413669 | orchestrator | Tuesday 03 March 2026 00:58:12 +0000 (0:00:00.052) 0:01:26.915 ********* 2026-03-03 00:59:05.413675 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:59:05.413681 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:59:05.413687 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:59:05.413693 | orchestrator | 2026-03-03 00:59:05.413700 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-03 00:59:05.413706 | orchestrator | Tuesday 03 March 2026 00:58:15 +0000 (0:00:02.802) 0:01:29.717 ********* 2026-03-03 00:59:05.413712 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:59:05.413719 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:59:05.413725 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:59:05.413732 | orchestrator | 2026-03-03 00:59:05.413737 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-03 00:59:05.413744 | orchestrator | Tuesday 03 March 2026 00:58:18 +0000 (0:00:02.732) 0:01:32.449 ********* 2026-03-03 00:59:05.413750 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:59:05.413757 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:59:05.413764 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:59:05.413770 | orchestrator | 2026-03-03 00:59:05.413776 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-03 00:59:05.413783 | orchestrator | Tuesday 03 March 2026 
00:58:26 +0000 (0:00:07.851) 0:01:40.300 ********* 2026-03-03 00:59:05.413789 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.413795 | orchestrator | 2026-03-03 00:59:05.413802 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-03 00:59:05.413808 | orchestrator | Tuesday 03 March 2026 00:58:26 +0000 (0:00:00.114) 0:01:40.415 ********* 2026-03-03 00:59:05.413814 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.413831 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.413838 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.413844 | orchestrator | 2026-03-03 00:59:05.413851 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-03 00:59:05.413857 | orchestrator | Tuesday 03 March 2026 00:58:27 +0000 (0:00:00.835) 0:01:41.251 ********* 2026-03-03 00:59:05.413863 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.413870 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.413876 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:59:05.413882 | orchestrator | 2026-03-03 00:59:05.413889 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-03 00:59:05.413898 | orchestrator | Tuesday 03 March 2026 00:58:27 +0000 (0:00:00.689) 0:01:41.941 ********* 2026-03-03 00:59:05.413904 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.413911 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.413917 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.413924 | orchestrator | 2026-03-03 00:59:05.413934 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-03 00:59:05.413940 | orchestrator | Tuesday 03 March 2026 00:58:28 +0000 (0:00:00.809) 0:01:42.750 ********* 2026-03-03 00:59:05.413946 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.413953 | orchestrator | skipping: 
[testbed-node-2] 2026-03-03 00:59:05.413960 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:59:05.413966 | orchestrator | 2026-03-03 00:59:05.413972 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-03 00:59:05.413979 | orchestrator | Tuesday 03 March 2026 00:58:29 +0000 (0:00:00.683) 0:01:43.434 ********* 2026-03-03 00:59:05.413986 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.413992 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.413999 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.414006 | orchestrator | 2026-03-03 00:59:05.414040 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-03 00:59:05.414053 | orchestrator | Tuesday 03 March 2026 00:58:29 +0000 (0:00:00.688) 0:01:44.122 ********* 2026-03-03 00:59:05.414061 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.414067 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.414074 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.414081 | orchestrator | 2026-03-03 00:59:05.414087 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-03 00:59:05.414094 | orchestrator | Tuesday 03 March 2026 00:58:30 +0000 (0:00:00.677) 0:01:44.800 ********* 2026-03-03 00:59:05.414100 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.414105 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.414112 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.414119 | orchestrator | 2026-03-03 00:59:05.414125 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-03 00:59:05.414132 | orchestrator | Tuesday 03 March 2026 00:58:30 +0000 (0:00:00.243) 0:01:45.044 ********* 2026-03-03 00:59:05.414139 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414146 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414153 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414160 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414168 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414175 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414182 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414195 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414212 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414219 | orchestrator | 2026-03-03 00:59:05.414226 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-03 00:59:05.414232 | orchestrator | Tuesday 03 March 2026 00:58:32 +0000 (0:00:01.387) 0:01:46.432 ********* 2026-03-03 00:59:05.414238 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414245 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414251 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414258 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414297 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414305 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414342 | orchestrator | 2026-03-03 00:59:05.414350 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-03 00:59:05.414357 | orchestrator | Tuesday 03 March 2026 00:58:36 +0000 (0:00:04.182) 0:01:50.614 ********* 
2026-03-03 00:59:05.414364 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414372 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414380 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414404 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414432 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 00:59:05.414440 | orchestrator | 2026-03-03 00:59:05.414448 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2026-03-03 00:59:05.414459 | orchestrator | Tuesday 03 March 2026 00:58:39 +0000 (0:00:03.000) 0:01:53.615 ********* 2026-03-03 00:59:05.414467 | orchestrator | 2026-03-03 00:59:05.414475 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-03 00:59:05.414486 | orchestrator | Tuesday 03 March 2026 00:58:39 +0000 (0:00:00.069) 0:01:53.685 ********* 2026-03-03 00:59:05.414494 | orchestrator | 2026-03-03 00:59:05.414501 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-03 00:59:05.414508 | orchestrator | Tuesday 03 March 2026 00:58:39 +0000 (0:00:00.060) 0:01:53.746 ********* 2026-03-03 00:59:05.414515 | orchestrator | 2026-03-03 00:59:05.414522 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-03 00:59:05.414530 | orchestrator | Tuesday 03 March 2026 00:58:39 +0000 (0:00:00.058) 0:01:53.804 ********* 2026-03-03 00:59:05.414536 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:59:05.414543 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:59:05.414550 | orchestrator | 2026-03-03 00:59:05.414558 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-03 00:59:05.414565 | orchestrator | Tuesday 03 March 2026 00:58:45 +0000 (0:00:06.170) 0:01:59.975 ********* 2026-03-03 00:59:05.414573 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:59:05.414581 | orchestrator | changed: [testbed-node-2] 2026-03-03 00:59:05.414589 | orchestrator | 2026-03-03 00:59:05.414597 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-03 00:59:05.414604 | orchestrator | Tuesday 03 March 2026 00:58:51 +0000 (0:00:06.107) 0:02:06.083 ********* 2026-03-03 00:59:05.414612 | orchestrator | changed: [testbed-node-1] 2026-03-03 00:59:05.414620 | orchestrator | changed: 
[testbed-node-2] 2026-03-03 00:59:05.414628 | orchestrator | 2026-03-03 00:59:05.414636 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-03 00:59:05.414643 | orchestrator | Tuesday 03 March 2026 00:58:58 +0000 (0:00:06.335) 0:02:12.419 ********* 2026-03-03 00:59:05.414651 | orchestrator | skipping: [testbed-node-0] 2026-03-03 00:59:05.414659 | orchestrator | 2026-03-03 00:59:05.414666 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-03 00:59:05.414674 | orchestrator | Tuesday 03 March 2026 00:58:58 +0000 (0:00:00.141) 0:02:12.560 ********* 2026-03-03 00:59:05.414682 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.414690 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.414698 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.414706 | orchestrator | 2026-03-03 00:59:05.414713 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-03 00:59:05.414721 | orchestrator | Tuesday 03 March 2026 00:58:59 +0000 (0:00:00.780) 0:02:13.341 ********* 2026-03-03 00:59:05.414729 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.414737 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.414744 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:59:05.414752 | orchestrator | 2026-03-03 00:59:05.414760 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-03 00:59:05.414767 | orchestrator | Tuesday 03 March 2026 00:58:59 +0000 (0:00:00.679) 0:02:14.020 ********* 2026-03-03 00:59:05.414775 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.414782 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.414790 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.414798 | orchestrator | 2026-03-03 00:59:05.414806 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] 
*************************** 2026-03-03 00:59:05.414853 | orchestrator | Tuesday 03 March 2026 00:59:00 +0000 (0:00:00.815) 0:02:14.836 ********* 2026-03-03 00:59:05.414862 | orchestrator | skipping: [testbed-node-1] 2026-03-03 00:59:05.414870 | orchestrator | skipping: [testbed-node-2] 2026-03-03 00:59:05.414877 | orchestrator | changed: [testbed-node-0] 2026-03-03 00:59:05.414884 | orchestrator | 2026-03-03 00:59:05.414891 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-03 00:59:05.414898 | orchestrator | Tuesday 03 March 2026 00:59:01 +0000 (0:00:00.631) 0:02:15.468 ********* 2026-03-03 00:59:05.414905 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.414912 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.414919 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.414926 | orchestrator | 2026-03-03 00:59:05.414933 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-03 00:59:05.414940 | orchestrator | Tuesday 03 March 2026 00:59:02 +0000 (0:00:00.767) 0:02:16.235 ********* 2026-03-03 00:59:05.414947 | orchestrator | ok: [testbed-node-0] 2026-03-03 00:59:05.414954 | orchestrator | ok: [testbed-node-1] 2026-03-03 00:59:05.414961 | orchestrator | ok: [testbed-node-2] 2026-03-03 00:59:05.414968 | orchestrator | 2026-03-03 00:59:05.414975 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 00:59:05.414983 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-03 00:59:05.414991 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-03 00:59:05.414998 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-03 00:59:05.415005 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-03-03 00:59:05.415013 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:59:05.415020 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 00:59:05.415027 | orchestrator | 2026-03-03 00:59:05.415034 | orchestrator | 2026-03-03 00:59:05.415041 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 00:59:05.415049 | orchestrator | Tuesday 03 March 2026 00:59:02 +0000 (0:00:00.891) 0:02:17.127 ********* 2026-03-03 00:59:05.415060 | orchestrator | =============================================================================== 2026-03-03 00:59:05.415068 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 25.47s 2026-03-03 00:59:05.415081 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.12s 2026-03-03 00:59:05.415089 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.19s 2026-03-03 00:59:05.415096 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.97s 2026-03-03 00:59:05.415102 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.84s 2026-03-03 00:59:05.415108 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.96s 2026-03-03 00:59:05.415115 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.18s 2026-03-03 00:59:05.415121 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.12s 2026-03-03 00:59:05.415127 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.00s 2026-03-03 00:59:05.415134 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.87s 2026-03-03 
00:59:05.415140 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.27s 2026-03-03 00:59:05.415146 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.87s 2026-03-03 00:59:05.415159 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.59s 2026-03-03 00:59:05.415166 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.57s 2026-03-03 00:59:05.415173 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.56s 2026-03-03 00:59:05.415180 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s 2026-03-03 00:59:05.415187 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.33s 2026-03-03 00:59:05.415195 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.30s 2026-03-03 00:59:05.415209 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.29s 2026-03-03 00:59:05.415216 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.26s 2026-03-03 00:59:05.415223 | orchestrator | 2026-03-03 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-03-03 00:59:08.447785 | orchestrator | 2026-03-03 00:59:08 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 00:59:08.450025 | orchestrator | 2026-03-03 00:59:08 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED 2026-03-03 00:59:08.450158 | orchestrator | 2026-03-03 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-03-03 00:59:11.482578 | orchestrator | 2026-03-03 00:59:11 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 00:59:11.483068 | orchestrator | 2026-03-03 00:59:11 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED 
2026-03-03 00:59:11.483249 | orchestrator | 2026-03-03 00:59:11 | INFO  | Wait 1 second(s) until the next check
2026-03-03 00:59:14.513877 | orchestrator | 2026-03-03 00:59:14 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 00:59:14.514868 | orchestrator | 2026-03-03 00:59:14 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state STARTED
2026-03-03 00:59:14.514924 | orchestrator | 2026-03-03 00:59:14 | INFO  | Wait 1 second(s) until the next check
[... identical status polling elided: both tasks reported "is in state STARTED" every ~3 seconds from 00:59:17 through 01:01:46 ...]
2026-03-03 01:01:49.655837 | orchestrator | 2026-03-03 01:01:49 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED
2026-03-03 01:01:49.664071 | orchestrator | 2026-03-03 01:01:49 | INFO  | Task 87853859-c324-4030-b05e-069aea08731f is in state SUCCESS
2026-03-03 01:01:49.666313 | orchestrator |
2026-03-03 01:01:49.666451 | orchestrator |
2026-03-03 01:01:49.666465 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:01:49.666474 | orchestrator |
2026-03-03 01:01:49.666480 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:01:49.666488 | orchestrator | Tuesday 03 March 2026 00:55:39 +0000 (0:00:00.280) 0:00:00.280 *********
2026-03-03 01:01:49.666495 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:01:49.666504 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:01:49.666511 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:01:49.666518 | orchestrator |
2026-03-03 01:01:49.666526 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 01:01:49.666533 | orchestrator | Tuesday 03 March 2026 00:55:39 +0000 (0:00:00.261) 0:00:00.542 *********
2026-03-03 01:01:49.666567 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-03 01:01:49.666576 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-03 01:01:49.666597 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-03 01:01:49.666603 | orchestrator |
2026-03-03 01:01:49.666609 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-03 01:01:49.666616 | orchestrator |
2026-03-03 01:01:49.666623 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-03 01:01:49.666628 | orchestrator | Tuesday 03 March 2026 00:55:40 +0000 (0:00:00.592) 0:00:01.134 *********
2026-03-03 01:01:49.666635 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:01:49.666643 | orchestrator |
2026-03-03 01:01:49.666647 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-03 01:01:49.666651 | orchestrator | Tuesday 03 March 2026 00:55:40 +0000 (0:00:00.758) 0:00:01.892 *********
2026-03-03 01:01:49.666655 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:01:49.666658 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:01:49.666662 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:01:49.666666 | orchestrator |
2026-03-03 01:01:49.666670 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-03 01:01:49.666674 | orchestrator | Tuesday 03 March 2026 00:55:41 +0000 (0:00:00.789) 0:00:02.682 *********
2026-03-03 01:01:49.666678 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:01:49.666682 | orchestrator |
2026-03-03 01:01:49.666686 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-03 01:01:49.666690 | orchestrator | Tuesday 03 March 2026 00:55:42 +0000 (0:00:00.808) 0:00:03.491 *********
2026-03-03 01:01:49.666693 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:01:49.666697 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:01:49.666774 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:01:49.666779 | orchestrator |
2026-03-03 01:01:49.666783 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-03 01:01:49.666787 | orchestrator | Tuesday 03 March 2026 00:55:43 +0000 (0:00:00.761) 0:00:04.252 *********
2026-03-03 01:01:49.666791 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-03 01:01:49.666795 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-03 01:01:49.666801 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-03 01:01:49.666807 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-03 01:01:49.666813 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-03 01:01:49.666819 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-03 01:01:49.666826 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-03 01:01:49.666832 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-03 01:01:49.666838 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-03 01:01:49.666845 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-03 01:01:49.666851 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-03 01:01:49.666857 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-03 01:01:49.666864 | orchestrator |
2026-03-03 01:01:49.666871 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-03 01:01:49.666878 | orchestrator | Tuesday 03 March 2026 00:55:47 +0000 (0:00:04.344) 0:00:08.596 *********
2026-03-03 01:01:49.666892 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-03 01:01:49.667019 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-03 01:01:49.667034 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-03 01:01:49.667040 | orchestrator |
2026-03-03 01:01:49.667047 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-03 01:01:49.667053 | orchestrator | Tuesday 03 March 2026 00:55:48 +0000 (0:00:00.760) 0:00:09.357 *********
2026-03-03 01:01:49.667059 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-03 01:01:49.667066 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-03 01:01:49.667074 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-03 01:01:49.667081 | orchestrator |
2026-03-03 01:01:49.667087 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-03 01:01:49.667094 | orchestrator | Tuesday 03 March 2026 00:55:49 +0000 (0:00:01.311) 0:00:10.668 *********
2026-03-03 01:01:49.667101 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-03 01:01:49.667108 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.667133 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-03 01:01:49.667140 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.667146 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-03 01:01:49.667153 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.667160 | orchestrator |
2026-03-03 01:01:49.667166 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-03 01:01:49.667172 | orchestrator | Tuesday 03 March 2026 00:55:50 +0000 (0:00:00.821) 0:00:11.490 *********
2026-03-03 01:01:49.667192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-03 01:01:49.667208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-03 01:01:49.667214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-03 01:01:49.667221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-03 01:01:49.667237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-03 01:01:49.667249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-03 01:01:49.667257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-03 01:01:49.667269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-03 01:01:49.667275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-03 01:01:49.667281 | orchestrator |
2026-03-03 01:01:49.667288 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-03 01:01:49.667295 | orchestrator | Tuesday 03 March 2026 00:55:52 +0000 (0:00:01.917) 0:00:13.408 *********
2026-03-03 01:01:49.667302 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:01:49.667308 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:01:49.667315 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:01:49.667321 | orchestrator |
2026-03-03 01:01:49.667328 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-03 01:01:49.667354 | orchestrator | Tuesday 03 March 2026 00:55:53 +0000 (0:00:01.241) 0:00:14.649 *********
2026-03-03 01:01:49.667360 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-03 01:01:49.667366 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-03 01:01:49.667371 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-03 01:01:49.667384 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-03 01:01:49.667389 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-03 01:01:49.667395 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-03 01:01:49.667400 | orchestrator |
2026-03-03 01:01:49.667407 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-03 01:01:49.667413 | orchestrator | Tuesday 03 March 2026 00:55:56 +0000 (0:00:02.666) 0:00:17.319 *********
2026-03-03 01:01:49.667419 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:01:49.667425 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:01:49.667431 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:01:49.667437 | orchestrator |
2026-03-03 01:01:49.667443 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-03 01:01:49.667449 | orchestrator | Tuesday 03 March 2026 00:55:58 +0000 (0:00:02.066) 0:00:19.385 *********
2026-03-03 01:01:49.667455 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:01:49.667535 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:01:49.667541 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:01:49.667548 | orchestrator |
2026-03-03 01:01:49.667554 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-03 01:01:49.667561 | orchestrator | Tuesday 03 March 2026 00:56:00 +0000 (0:00:02.068) 0:00:21.454 *********
2026-03-03 01:01:49.667592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-03 01:01:49.667621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-03 01:01:49.667731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-03 01:01:49.667743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-03 01:01:49.667773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-03 01:01:49.667789 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.667795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-03 01:01:49.667802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-03 01:01:49.667808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-03 01:01:49.667815 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.667832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-03 01:01:49.667844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.667851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.667863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-03 01:01:49.667870 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.667895 | orchestrator | 2026-03-03 01:01:49.667902 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-03 01:01:49.667909 | orchestrator | Tuesday 03 March 2026 00:56:01 +0000 (0:00:00.735) 0:00:22.190 ********* 2026-03-03 01:01:49.667916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.667923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.667990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.668031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-03 01:01:49.668037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.668050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-03 01:01:49.668064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.668090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155', '__omit_place_holder__faa06fe4922c8244aa8bb4f6ed323457b761e155'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-03 01:01:49.668097 | orchestrator | 2026-03-03 01:01:49.668103 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-03 01:01:49.668110 | orchestrator | Tuesday 03 March 2026 00:56:04 +0000 (0:00:02.897) 0:00:25.087 ********* 2026-03-03 01:01:49.668117 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668144 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.668164 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-03 01:01:49.668169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-03 01:01:49.668173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-03 01:01:49.668177 | orchestrator | 2026-03-03 01:01:49.668180 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-03 01:01:49.668184 | orchestrator | Tuesday 03 March 2026 00:56:08 +0000 (0:00:04.022) 0:00:29.110 ********* 2026-03-03 01:01:49.668189 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-03 01:01:49.668193 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-03 01:01:49.668197 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-03 01:01:49.668201 | orchestrator | 2026-03-03 01:01:49.668205 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-03 01:01:49.668208 | orchestrator | Tuesday 03 March 2026 00:56:11 +0000 (0:00:02.910) 0:00:32.021 ********* 2026-03-03 01:01:49.668212 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-03 01:01:49.668218 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-03 01:01:49.668224 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-03 01:01:49.668238 | orchestrator | 2026-03-03 01:01:49.670203 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-03 01:01:49.670274 | orchestrator | Tuesday 03 March 2026 00:56:13 +0000 (0:00:02.845) 0:00:34.866 ********* 2026-03-03 01:01:49.670280 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.670286 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.670290 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.670294 | orchestrator | 2026-03-03 01:01:49.670298 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-03 01:01:49.670303 | orchestrator | Tuesday 03 March 2026 00:56:14 +0000 (0:00:00.790) 0:00:35.656 ********* 2026-03-03 01:01:49.670307 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-03 01:01:49.670313 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-03 01:01:49.670329 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-03 01:01:49.670334 | orchestrator | 2026-03-03 01:01:49.670445 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-03 01:01:49.670449 | orchestrator | Tuesday 03 March 2026 00:56:17 +0000 (0:00:02.523) 0:00:38.180 ********* 2026-03-03 01:01:49.670453 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-03 01:01:49.670457 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-03 01:01:49.670461 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-03 01:01:49.670465 | orchestrator | 2026-03-03 01:01:49.670469 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-03 01:01:49.670473 | orchestrator | Tuesday 03 March 2026 00:56:20 +0000 (0:00:02.881) 0:00:41.062 ********* 2026-03-03 01:01:49.670477 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-03 01:01:49.670481 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-03 01:01:49.670485 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-03 01:01:49.670489 | orchestrator | 2026-03-03 01:01:49.670493 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-03 01:01:49.670497 | orchestrator | Tuesday 03 March 2026 00:56:22 +0000 (0:00:02.370) 0:00:43.433 ********* 
2026-03-03 01:01:49.670500 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-03 01:01:49.670504 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-03 01:01:49.670508 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-03 01:01:49.670512 | orchestrator | 2026-03-03 01:01:49.670516 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-03 01:01:49.670520 | orchestrator | Tuesday 03 March 2026 00:56:24 +0000 (0:00:02.276) 0:00:45.709 ********* 2026-03-03 01:01:49.670524 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.670528 | orchestrator | 2026-03-03 01:01:49.670531 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-03 01:01:49.670535 | orchestrator | Tuesday 03 March 2026 00:56:25 +0000 (0:00:00.936) 0:00:46.646 ********* 2026-03-03 01:01:49.670541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.670563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.670578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.670583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.670587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.670613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.670621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-03 01:01:49.670628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-03 01:01:49.670640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-03 01:01:49.670647 | orchestrator | 2026-03-03 01:01:49.670654 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-03 01:01:49.670660 | orchestrator | Tuesday 03 March 2026 00:56:28 +0000 (0:00:03.346) 0:00:49.992 ********* 2026-03-03 01:01:49.670671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670695 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.670701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670725 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.670731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670757 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.670763 | orchestrator | 2026-03-03 01:01:49.670769 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-03 01:01:49.670775 | orchestrator | Tuesday 03 March 2026 00:56:29 +0000 (0:00:00.850) 0:00:50.842 ********* 2026-03-03 01:01:49.670781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670788 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670806 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.670812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670826 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670842 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670853 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.670857 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.670861 | orchestrator | 2026-03-03 01:01:49.670865 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-03 01:01:49.670869 | orchestrator | Tuesday 03 March 2026 00:56:30 +0000 (0:00:00.716) 0:00:51.559 ********* 2026-03-03 01:01:49.670873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670901 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.670908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670924 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.670928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670943 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.670947 | orchestrator | 2026-03-03 01:01:49.670951 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-03 01:01:49.670955 | orchestrator | Tuesday 03 March 2026 00:56:31 +0000 (0:00:01.344) 0:00:52.903 ********* 2026-03-03 01:01:49.670961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670977 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.670981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.670985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.670989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.670993 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.671000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671021 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.671025 | orchestrator | 2026-03-03 01:01:49.671029 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-03 01:01:49.671033 | orchestrator | Tuesday 03 March 2026 00:56:32 +0000 (0:00:01.009) 0:00:53.913 ********* 2026-03-03 01:01:49.671037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671054 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.671068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671097 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.671103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671120 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.671126 | orchestrator | 2026-03-03 01:01:49.671132 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-03 01:01:49.671138 | orchestrator | Tuesday 03 March 2026 00:56:34 +0000 (0:00:02.052) 0:00:55.965 ********* 2026-03-03 01:01:49.671144 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671172 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.671176 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671189 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.671193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671211 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.671215 | orchestrator | 2026-03-03 01:01:49.671219 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal 
TLS certificate] *** 2026-03-03 01:01:49.671225 | orchestrator | Tuesday 03 March 2026 00:56:35 +0000 (0:00:00.736) 0:00:56.702 ********* 2026-03-03 01:01:49.671229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-03-03 01:01:49.671241 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.671245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671266 | 
orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.671273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671285 | orchestrator | skipping: [testbed-node-2] 
2026-03-03 01:01:49.671289 | orchestrator | 2026-03-03 01:01:49.671293 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-03 01:01:49.671297 | orchestrator | Tuesday 03 March 2026 00:56:36 +0000 (0:00:00.516) 0:00:57.219 ********* 2026-03-03 01:01:49.671301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671316 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.671323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-03 01:01:49.671366 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.671370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-03 01:01:49.671374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-03 01:01:49.671382 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.671386 | orchestrator | 2026-03-03 01:01:49.671390 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-03 01:01:49.671394 | orchestrator | Tuesday 03 March 2026 00:56:36 +0000 (0:00:00.589) 0:00:57.808 ********* 2026-03-03 01:01:49.671397 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-03 01:01:49.671402 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-03 01:01:49.671409 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-03 01:01:49.671413 | orchestrator | 2026-03-03 01:01:49.671417 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-03 01:01:49.671420 | orchestrator | Tuesday 03 March 2026 00:56:38 +0000 (0:00:01.579) 0:00:59.388 ********* 2026-03-03 01:01:49.671424 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-03 01:01:49.671429 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-03 01:01:49.671433 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-03 01:01:49.671436 | orchestrator | 2026-03-03 01:01:49.671440 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-03 01:01:49.671444 | orchestrator | Tuesday 03 March 2026 00:56:40 +0000 (0:00:01.981) 0:01:01.369 ********* 2026-03-03 01:01:49.671451 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-03 01:01:49.671455 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-03 01:01:49.671459 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-03 01:01:49.671463 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-03 01:01:49.671467 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.671470 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-03 01:01:49.671474 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.671478 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-03 01:01:49.671482 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.671486 | orchestrator | 2026-03-03 01:01:49.671490 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-03 01:01:49.671494 | orchestrator | Tuesday 03 March 2026 00:56:41 +0000 (0:00:00.920) 0:01:02.290 ********* 2026-03-03 01:01:49.671498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.671502 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.671510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-03 01:01:49.671518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.671522 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.671530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-03 01:01:49.671534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-03 01:01:49.671538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-03 01:01:49.671542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-03 01:01:49.671553 | orchestrator | 2026-03-03 01:01:49.671562 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-03 01:01:49.671570 | orchestrator | Tuesday 03 March 2026 00:56:44 +0000 (0:00:03.213) 0:01:05.504 ********* 2026-03-03 01:01:49.671576 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.671581 | orchestrator | 2026-03-03 01:01:49.671587 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-03 01:01:49.671593 | orchestrator | Tuesday 03 March 2026 00:56:45 +0000 (0:00:00.710) 0:01:06.214 ********* 2026-03-03 01:01:49.671601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-03 01:01:49.671614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.671625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-03 01:01:49.671643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.671647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-03 01:01:49.671665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.671669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671680 | orchestrator | 2026-03-03 01:01:49.671684 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-03 01:01:49.671688 | orchestrator | Tuesday 03 March 2026 00:56:50 +0000 (0:00:04.844) 
0:01:11.058 ********* 2026-03-03 01:01:49.671693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-03 01:01:49.671704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.671714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671727 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.671734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-03 01:01:49.671744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.671749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671757 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.671767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-03 01:01:49.671771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.671778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671786 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.671790 | orchestrator | 2026-03-03 01:01:49.671794 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-03 01:01:49.671800 | orchestrator | Tuesday 03 March 2026 00:56:51 +0000 (0:00:01.641) 0:01:12.699 ********* 2026-03-03 01:01:49.671806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-03 01:01:49.671814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-03 01:01:49.671820 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.671826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-03 01:01:49.671832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-03 01:01:49.671838 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.671844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-03 01:01:49.671849 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-03 01:01:49.671855 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.671861 | orchestrator | 2026-03-03 01:01:49.671871 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-03 01:01:49.671877 | orchestrator | Tuesday 03 March 2026 00:56:52 +0000 (0:00:00.995) 0:01:13.695 ********* 2026-03-03 01:01:49.671884 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.671890 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.671897 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.671903 | orchestrator | 2026-03-03 01:01:49.671909 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-03 01:01:49.671915 | orchestrator | Tuesday 03 March 2026 00:56:54 +0000 (0:00:01.359) 0:01:15.054 ********* 2026-03-03 01:01:49.671919 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.671922 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.671926 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.671930 | orchestrator | 2026-03-03 01:01:49.671938 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-03 01:01:49.671942 | orchestrator | Tuesday 03 March 2026 00:56:56 +0000 (0:00:02.214) 0:01:17.268 ********* 2026-03-03 01:01:49.671949 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.671953 | orchestrator | 2026-03-03 01:01:49.671957 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-03 01:01:49.671961 | orchestrator | Tuesday 03 March 2026 00:56:56 +0000 (0:00:00.693) 0:01:17.962 ********* 2026-03-03 
01:01:49.671965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.671970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.671979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.672001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.672009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.672013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.672017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.672021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.672025 | orchestrator | 2026-03-03 01:01:49.672029 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-03 01:01:49.672033 | orchestrator | Tuesday 03 March 2026 00:57:01 +0000 (0:00:04.236) 0:01:22.198 ********* 2026-03-03 01:01:49.674185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.674273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674286 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.674291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.674296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674304 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.674323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.674349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674359 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.674363 | orchestrator | 2026-03-03 01:01:49.674367 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-03 01:01:49.674372 | orchestrator | Tuesday 03 March 2026 00:57:02 +0000 (0:00:01.613) 0:01:23.812 ********* 2026-03-03 01:01:49.674379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-03 01:01:49.674387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-03 01:01:49.674397 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.674403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-03 01:01:49.674409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-03 01:01:49.674416 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.674422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-03 01:01:49.674438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-03 01:01:49.674456 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.674463 | orchestrator | 2026-03-03 01:01:49.674469 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-03 01:01:49.674475 | orchestrator | Tuesday 03 March 2026 00:57:03 +0000 (0:00:01.109) 0:01:24.921 ********* 2026-03-03 01:01:49.674481 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.674487 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.674493 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.674498 | orchestrator | 2026-03-03 01:01:49.674503 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-03 01:01:49.674510 | orchestrator | Tuesday 03 March 2026 00:57:05 +0000 (0:00:01.319) 0:01:26.241 ********* 2026-03-03 01:01:49.674517 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.674522 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.674528 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.674534 | orchestrator | 2026-03-03 01:01:49.674546 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-03 01:01:49.674552 | orchestrator | Tuesday 03 March 2026 00:57:07 +0000 (0:00:01.903) 0:01:28.145 ********* 2026-03-03 01:01:49.674558 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.674564 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.674570 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.674575 | orchestrator | 2026-03-03 01:01:49.674581 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-03 01:01:49.674587 | orchestrator | Tuesday 03 March 2026 00:57:07 +0000 (0:00:00.318) 0:01:28.463 ********* 2026-03-03 
01:01:49.674593 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.674599 | orchestrator | 2026-03-03 01:01:49.674604 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-03 01:01:49.674610 | orchestrator | Tuesday 03 March 2026 00:57:08 +0000 (0:00:00.835) 0:01:29.299 ********* 2026-03-03 01:01:49.674622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-03 01:01:49.674632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-03 01:01:49.674638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-03 01:01:49.674649 | orchestrator | 2026-03-03 01:01:49.674655 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-03 01:01:49.674661 | orchestrator | Tuesday 03 March 2026 00:57:11 +0000 (0:00:03.027) 0:01:32.327 ********* 2026-03-03 01:01:49.674671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 
rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-03 01:01:49.674678 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.674687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-03 01:01:49.674693 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.674699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-03 01:01:49.674705 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.674711 | orchestrator | 2026-03-03 01:01:49.674716 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-03 01:01:49.674722 | orchestrator | Tuesday 03 March 2026 00:57:12 +0000 (0:00:01.519) 0:01:33.847 ********* 2026-03-03 01:01:49.674729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-03 01:01:49.674742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-03 01:01:49.674749 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.674755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-03 01:01:49.674761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-03 01:01:49.674767 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.674777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-03 01:01:49.674783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-03 01:01:49.674789 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.674795 | orchestrator | 2026-03-03 01:01:49.674801 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-03 01:01:49.674810 | orchestrator | Tuesday 03 March 2026 00:57:14 +0000 (0:00:01.875) 0:01:35.722 ********* 2026-03-03 01:01:49.674816 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.674822 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.674828 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.674834 | orchestrator | 2026-03-03 01:01:49.674840 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw 
ProxySQL rules config] *********** 2026-03-03 01:01:49.674846 | orchestrator | Tuesday 03 March 2026 00:57:15 +0000 (0:00:00.804) 0:01:36.526 ********* 2026-03-03 01:01:49.674852 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.674858 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.674864 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.674870 | orchestrator | 2026-03-03 01:01:49.674875 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-03 01:01:49.674881 | orchestrator | Tuesday 03 March 2026 00:57:16 +0000 (0:00:01.231) 0:01:37.757 ********* 2026-03-03 01:01:49.674887 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.674893 | orchestrator | 2026-03-03 01:01:49.674899 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-03 01:01:49.674908 | orchestrator | Tuesday 03 March 2026 00:57:17 +0000 (0:00:00.711) 0:01:38.469 ********* 2026-03-03 01:01:49.674915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.674921 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.674949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.674966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674989 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.674999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675019 | orchestrator | 2026-03-03 01:01:49.675029 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-03 01:01:49.675039 | orchestrator | Tuesday 03 March 2026 00:57:20 +0000 (0:00:03.365) 0:01:41.834 ********* 2026-03-03 01:01:49.675045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.675051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675061 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675084 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.675090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.675096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675115 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.675129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.675140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675159 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.675167 | orchestrator | 2026-03-03 01:01:49.675172 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-03 01:01:49.675178 | orchestrator | Tuesday 03 March 2026 00:57:21 +0000 (0:00:00.984) 0:01:42.818 
********* 2026-03-03 01:01:49.675184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-03 01:01:49.675189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-03 01:01:49.675196 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.675202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-03 01:01:49.675208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-03 01:01:49.675214 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.675220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-03 01:01:49.675229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-03 01:01:49.675235 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.675246 | orchestrator | 2026-03-03 01:01:49.675252 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-03 01:01:49.675258 | orchestrator | Tuesday 03 March 2026 00:57:22 +0000 
(0:00:01.135) 0:01:43.954 ********* 2026-03-03 01:01:49.675264 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.675271 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.675277 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.675281 | orchestrator | 2026-03-03 01:01:49.675285 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-03 01:01:49.675292 | orchestrator | Tuesday 03 March 2026 00:57:24 +0000 (0:00:01.327) 0:01:45.281 ********* 2026-03-03 01:01:49.675296 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.675300 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.675304 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.675308 | orchestrator | 2026-03-03 01:01:49.675311 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-03 01:01:49.675317 | orchestrator | Tuesday 03 March 2026 00:57:26 +0000 (0:00:02.670) 0:01:47.952 ********* 2026-03-03 01:01:49.675323 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.675329 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.675420 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.675430 | orchestrator | 2026-03-03 01:01:49.675436 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-03 01:01:49.675442 | orchestrator | Tuesday 03 March 2026 00:57:27 +0000 (0:00:00.780) 0:01:48.733 ********* 2026-03-03 01:01:49.675447 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.675453 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.675459 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.675465 | orchestrator | 2026-03-03 01:01:49.675472 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-03 01:01:49.675478 | orchestrator | Tuesday 03 March 2026 00:57:28 +0000 
(0:00:00.351) 0:01:49.085 ********* 2026-03-03 01:01:49.675483 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.675490 | orchestrator | 2026-03-03 01:01:49.675496 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-03 01:01:49.675502 | orchestrator | Tuesday 03 March 2026 00:57:28 +0000 (0:00:00.751) 0:01:49.836 ********* 2026-03-03 01:01:49.675509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:01:49.675517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2026-03-03 01:01:49.675530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:01:49.675544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.675556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:01:49.675570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:01:49.675622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:01:49.675630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675656 | orchestrator |
2026-03-03 01:01:49.675660 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-03 01:01:49.675664 | orchestrator | Tuesday 03 March 2026 00:57:32 +0000 (0:00:03.549) 0:01:53.386 *********
2026-03-03 01:01:49.675668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:01:49.675676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:01:49.675683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:01:49.675714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:01:49.675725 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.675729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675752 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.675759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:01:49.675766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:01:49.675770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.675797 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.675800 | orchestrator |
2026-03-03 01:01:49.675804 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-03 01:01:49.675808 | orchestrator | Tuesday 03 March 2026 00:57:33 +0000 (0:00:00.837) 0:01:54.223 *********
2026-03-03 01:01:49.675813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-03 01:01:49.675820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-03 01:01:49.675825 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.675829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-03 01:01:49.675833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-03 01:01:49.675837 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.675840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-03 01:01:49.675844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-03 01:01:49.675848 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.675855 | orchestrator |
2026-03-03 01:01:49.675859 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-03-03 01:01:49.675863 | orchestrator | Tuesday 03 March 2026 00:57:34 +0000 (0:00:01.061) 0:01:55.285 *********
2026-03-03 01:01:49.675867 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:01:49.675871 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:01:49.675875 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:01:49.675879 | orchestrator |
2026-03-03 01:01:49.675883 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-03 01:01:49.675887 | orchestrator | Tuesday 03 March 2026 00:57:36 +0000 (0:00:01.729) 0:01:57.015 *********
2026-03-03 01:01:49.675891 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:01:49.675894 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:01:49.675898 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:01:49.675902 | orchestrator |
2026-03-03 01:01:49.675906 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-03 01:01:49.675910 | orchestrator | Tuesday 03 March 2026 00:57:37 +0000 (0:00:01.823) 0:01:58.839 *********
2026-03-03 01:01:49.675914 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.675918 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.675922 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.675925 | orchestrator |
2026-03-03 01:01:49.675929 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-03 01:01:49.675933 | orchestrator | Tuesday 03 March 2026 00:57:38 +0000 (0:00:00.505) 0:01:59.344 *********
2026-03-03 01:01:49.675937 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:01:49.675941 | orchestrator |
2026-03-03 01:01:49.675945 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-03 01:01:49.675949 | orchestrator | Tuesday 03 March 2026 00:57:39 +0000 (0:00:00.785) 0:02:00.130 *********
2026-03-03 01:01:49.676074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-03 01:01:49.676099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-03 01:01:49.676109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-03 01:01:49.676119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-03 01:01:49.676127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-03 01:01:49.676136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.676143 | orchestrator | 2026-03-03 01:01:49.676147 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-03 01:01:49.676152 | orchestrator | Tuesday 03 March 2026 00:57:43 +0000 (0:00:04.751) 0:02:04.881 ********* 2026-03-03 01:01:49.676156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-03 01:01:49.676163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}})  2026-03-03 01:01:49.676185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.676190 | orchestrator | skipping: 
[testbed-node-0] 2026-03-03 01:01:49.676197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.676201 | orchestrator | skipping: 
[testbed-node-1] 2026-03-03 01:01:49.676213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-03 01:01:49.676220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.676224 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676228 | orchestrator | 2026-03-03 01:01:49.676232 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-03 01:01:49.676236 | orchestrator | Tuesday 03 March 2026 
00:57:47 +0000 (0:00:04.011) 0:02:08.892 ********* 2026-03-03 01:01:49.676243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-03 01:01:49.676250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-03 01:01:49.676254 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.676258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-03 01:01:49.676262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-03 01:01:49.676266 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-03 01:01:49.676274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-03 01:01:49.676278 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.676282 | orchestrator | 2026-03-03 01:01:49.676285 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-03 01:01:49.676289 | orchestrator | Tuesday 03 March 2026 00:57:52 +0000 (0:00:05.015) 0:02:13.908 ********* 2026-03-03 01:01:49.676293 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.676297 | orchestrator | 
changed: [testbed-node-1] 2026-03-03 01:01:49.676301 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.676305 | orchestrator | 2026-03-03 01:01:49.676309 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-03 01:01:49.676312 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:01.410) 0:02:15.318 ********* 2026-03-03 01:01:49.676316 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.676324 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.676328 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.676332 | orchestrator | 2026-03-03 01:01:49.676356 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-03 01:01:49.676362 | orchestrator | Tuesday 03 March 2026 00:57:56 +0000 (0:00:02.013) 0:02:17.332 ********* 2026-03-03 01:01:49.676366 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.676370 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.676374 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676378 | orchestrator | 2026-03-03 01:01:49.676382 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-03 01:01:49.676385 | orchestrator | Tuesday 03 March 2026 00:57:56 +0000 (0:00:00.429) 0:02:17.761 ********* 2026-03-03 01:01:49.676389 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.676393 | orchestrator | 2026-03-03 01:01:49.676397 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-03 01:01:49.676401 | orchestrator | Tuesday 03 March 2026 00:57:57 +0000 (0:00:00.694) 0:02:18.456 ********* 2026-03-03 01:01:49.676407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-03 01:01:49.676412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-03 01:01:49.676416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-03 01:01:49.676420 | orchestrator | 2026-03-03 01:01:49.676424 | 
orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-03 01:01:49.676428 | orchestrator | Tuesday 03 March 2026 00:58:01 +0000 (0:00:04.211) 0:02:22.667 ********* 2026-03-03 01:01:49.676432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-03 01:01:49.676438 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.676445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-03 01:01:49.676449 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.676455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-03 01:01:49.676459 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676463 | orchestrator | 2026-03-03 01:01:49.676467 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-03 01:01:49.676471 | orchestrator | Tuesday 03 March 2026 00:58:02 +0000 (0:00:01.058) 0:02:23.726 ********* 2026-03-03 01:01:49.676475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-03 01:01:49.676480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-03 01:01:49.676484 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.676488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-03 01:01:49.676492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-03 01:01:49.676496 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.676500 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-03 01:01:49.676503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-03 01:01:49.676507 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676511 | orchestrator | 2026-03-03 01:01:49.676515 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-03 01:01:49.676519 | orchestrator | Tuesday 03 March 2026 00:58:03 +0000 (0:00:01.033) 0:02:24.760 ********* 2026-03-03 01:01:49.676522 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.676526 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.676530 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.676534 | orchestrator | 2026-03-03 01:01:49.676541 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-03 01:01:49.676544 | orchestrator | Tuesday 03 March 2026 00:58:05 +0000 (0:00:01.969) 0:02:26.730 ********* 2026-03-03 01:01:49.676548 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.676552 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.676556 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.676560 | orchestrator | 2026-03-03 01:01:49.676564 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-03 01:01:49.676568 | orchestrator | Tuesday 03 March 2026 00:58:08 +0000 (0:00:02.821) 0:02:29.551 ********* 2026-03-03 01:01:49.676574 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.676580 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676586 | orchestrator | skipping: 
[testbed-node-1] 2026-03-03 01:01:49.676591 | orchestrator | 2026-03-03 01:01:49.676597 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-03 01:01:49.676603 | orchestrator | Tuesday 03 March 2026 00:58:09 +0000 (0:00:00.556) 0:02:30.108 ********* 2026-03-03 01:01:49.676609 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.676614 | orchestrator | 2026-03-03 01:01:49.676620 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-03 01:01:49.676626 | orchestrator | Tuesday 03 March 2026 00:58:10 +0000 (0:00:01.082) 0:02:31.190 ********* 2026-03-03 01:01:49.676640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-03 01:01:49.676648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-03 01:01:49.676669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-03 01:01:49.676676 | orchestrator | 2026-03-03 01:01:49.676682 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-03 01:01:49.676693 | orchestrator | Tuesday 03 March 2026 00:58:14 +0000 (0:00:04.683) 0:02:35.873 ********* 2026-03-03 01:01:49.676703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-03 01:01:49.676710 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.676720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-03-03 01:01:49.676732 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.676743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-03 01:01:49.676751 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676757 | orchestrator | 2026-03-03 01:01:49.676764 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-03 01:01:49.676770 | orchestrator | Tuesday 03 March 2026 00:58:16 +0000 (0:00:01.672) 0:02:37.546 ********* 2026-03-03 01:01:49.676780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-03 01:01:49.676787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-03 01:01:49.676793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-03 01:01:49.676803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-03 01:01:49.676809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-03 01:01:49.676814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-03 01:01:49.676819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-03 01:01:49.676824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-03 01:01:49.676829 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.676833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-03 01:01:49.676838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-03 01:01:49.676842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-03 01:01:49.676847 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.676854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-03 01:01:49.676859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-03 01:01:49.676866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-03 01:01:49.676870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-03 01:01:49.676875 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676879 | orchestrator | 2026-03-03 01:01:49.676886 | orchestrator | TASK 
[proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-03 01:01:49.676890 | orchestrator | Tuesday 03 March 2026 00:58:17 +0000 (0:00:00.920) 0:02:38.467 ********* 2026-03-03 01:01:49.676895 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.676899 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.676904 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.676909 | orchestrator | 2026-03-03 01:01:49.676914 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-03 01:01:49.676918 | orchestrator | Tuesday 03 March 2026 00:58:18 +0000 (0:00:01.318) 0:02:39.785 ********* 2026-03-03 01:01:49.676922 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.676927 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.676931 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.676935 | orchestrator | 2026-03-03 01:01:49.676938 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-03 01:01:49.676942 | orchestrator | Tuesday 03 March 2026 00:58:20 +0000 (0:00:01.940) 0:02:41.725 ********* 2026-03-03 01:01:49.676946 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.676950 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.676954 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676958 | orchestrator | 2026-03-03 01:01:49.676961 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-03 01:01:49.676965 | orchestrator | Tuesday 03 March 2026 00:58:20 +0000 (0:00:00.265) 0:02:41.991 ********* 2026-03-03 01:01:49.676969 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.676973 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.676977 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.676981 | orchestrator | 2026-03-03 01:01:49.676984 | orchestrator | TASK 
[include_role : keystone] ************************************************* 2026-03-03 01:01:49.676988 | orchestrator | Tuesday 03 March 2026 00:58:21 +0000 (0:00:00.420) 0:02:42.411 ********* 2026-03-03 01:01:49.676992 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.676996 | orchestrator | 2026-03-03 01:01:49.676999 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-03 01:01:49.677003 | orchestrator | Tuesday 03 March 2026 00:58:22 +0000 (0:00:00.857) 0:02:43.269 ********* 2026-03-03 01:01:49.677007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:01:49.677014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:01:49.677021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:01:49.677029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 
01:01:49.677034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:01:49.677038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:01:49.677042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:01:49.677049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:01:49.677059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:01:49.677063 | orchestrator | 2026-03-03 01:01:49.677067 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-03 01:01:49.677071 | orchestrator | Tuesday 03 March 2026 00:58:25 +0000 (0:00:03.464) 0:02:46.733 ********* 2026-03-03 01:01:49.677075 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:01:49.677080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:01:49.677084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:01:49.677097 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.677111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:01:49.677120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:01:49.677125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:01:49.677129 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.677133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:01:49.677137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:01:49.677141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:01:49.677149 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.677153 | orchestrator | 2026-03-03 01:01:49.677157 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-03 01:01:49.677241 | orchestrator | Tuesday 03 March 2026 00:58:26 +0000 (0:00:00.532) 0:02:47.265 ********* 2026-03-03 01:01:49.677249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-03 01:01:49.677255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-03 01:01:49.677259 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.677266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-03 01:01:49.677270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-03 01:01:49.677274 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.677278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-03 01:01:49.677282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-03 01:01:49.677286 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.677290 | orchestrator | 2026-03-03 01:01:49.677294 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-03 01:01:49.677298 | orchestrator | Tuesday 03 March 2026 00:58:26 +0000 (0:00:00.723) 0:02:47.989 ********* 2026-03-03 01:01:49.677302 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.677306 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.677310 | orchestrator | changed: [testbed-node-2] 2026-03-03 
01:01:49.677314 | orchestrator | 2026-03-03 01:01:49.677317 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-03 01:01:49.677321 | orchestrator | Tuesday 03 March 2026 00:58:28 +0000 (0:00:01.343) 0:02:49.333 ********* 2026-03-03 01:01:49.677325 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.677329 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.677333 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.677362 | orchestrator | 2026-03-03 01:01:49.677368 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-03 01:01:49.677374 | orchestrator | Tuesday 03 March 2026 00:58:30 +0000 (0:00:01.795) 0:02:51.129 ********* 2026-03-03 01:01:49.677380 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.677386 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.677393 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.677398 | orchestrator | 2026-03-03 01:01:49.677405 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-03 01:01:49.677418 | orchestrator | Tuesday 03 March 2026 00:58:30 +0000 (0:00:00.440) 0:02:51.569 ********* 2026-03-03 01:01:49.677424 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.677428 | orchestrator | 2026-03-03 01:01:49.677432 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-03 01:01:49.677436 | orchestrator | Tuesday 03 March 2026 00:58:31 +0000 (0:00:00.915) 0:02:52.485 ********* 2026-03-03 01:01:49.677440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:01:49.677449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:01:49.677462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2026-03-03 01:01:49.677475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677479 | orchestrator | 2026-03-03 01:01:49.677483 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-03 01:01:49.677488 | orchestrator | Tuesday 03 March 2026 00:58:34 +0000 (0:00:03.410) 0:02:55.896 ********* 2026-03-03 01:01:49.677500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2026-03-03 01:01:49.677504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677508 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.677513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:01:49.677520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677524 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.677530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:01:49.677537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677541 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.677545 | orchestrator | 2026-03-03 01:01:49.677549 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-03 01:01:49.677553 | orchestrator | Tuesday 03 March 2026 00:58:35 +0000 (0:00:01.005) 0:02:56.901 ********* 2026-03-03 01:01:49.677557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-03 01:01:49.677563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-03 01:01:49.677567 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.677571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-03 01:01:49.677575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-03 01:01:49.677582 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.677586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}})  2026-03-03 01:01:49.677590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-03 01:01:49.677593 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.677597 | orchestrator | 2026-03-03 01:01:49.677601 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-03 01:01:49.677605 | orchestrator | Tuesday 03 March 2026 00:58:36 +0000 (0:00:00.864) 0:02:57.765 ********* 2026-03-03 01:01:49.677609 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.677613 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.677617 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.677620 | orchestrator | 2026-03-03 01:01:49.677624 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-03 01:01:49.677628 | orchestrator | Tuesday 03 March 2026 00:58:38 +0000 (0:00:01.356) 0:02:59.122 ********* 2026-03-03 01:01:49.677632 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.677636 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.677639 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.677643 | orchestrator | 2026-03-03 01:01:49.677647 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-03 01:01:49.677651 | orchestrator | Tuesday 03 March 2026 00:58:40 +0000 (0:00:01.969) 0:03:01.091 ********* 2026-03-03 01:01:49.677655 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.677659 | orchestrator | 2026-03-03 01:01:49.677663 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-03 01:01:49.677667 | orchestrator | Tuesday 03 March 2026 00:58:41 +0000 (0:00:01.084) 
0:03:02.175 ********* 2026-03-03 01:01:49.677671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-03 01:01:49.677678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-03 01:01:49.677701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-03 01:01:49.677727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677739 | orchestrator | 2026-03-03 01:01:49.677743 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-03 01:01:49.677747 | orchestrator | Tuesday 03 March 2026 00:58:44 +0000 (0:00:03.012) 0:03:05.188 ********* 2026-03-03 01:01:49.677753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-03 01:01:49.677758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677776 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.677780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-03 01:01:49.677784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677794 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677801 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.677808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-03 01:01:49.677814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.677833 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.677838 | orchestrator | 2026-03-03 01:01:49.677844 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-03 01:01:49.677850 | orchestrator | Tuesday 03 March 2026 00:58:44 +0000 (0:00:00.572) 0:03:05.760 ********* 2026-03-03 01:01:49.677857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786'}})  2026-03-03 01:01:49.677863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-03 01:01:49.677869 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.677876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-03 01:01:49.677887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-03 01:01:49.677899 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.677907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-03 01:01:49.677914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-03 01:01:49.677921 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.677928 | orchestrator | 2026-03-03 01:01:49.677939 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-03 01:01:49.677946 | orchestrator | Tuesday 03 March 2026 00:58:45 +0000 (0:00:01.018) 0:03:06.779 ********* 2026-03-03 01:01:49.677952 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.677957 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.677962 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.677966 | orchestrator | 2026-03-03 
01:01:49.677971 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-03 01:01:49.677975 | orchestrator | Tuesday 03 March 2026 00:58:47 +0000 (0:00:01.244) 0:03:08.024 ********* 2026-03-03 01:01:49.677980 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.677984 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.677989 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.677994 | orchestrator | 2026-03-03 01:01:49.677998 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-03 01:01:49.678003 | orchestrator | Tuesday 03 March 2026 00:58:48 +0000 (0:00:01.784) 0:03:09.808 ********* 2026-03-03 01:01:49.678008 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.678036 | orchestrator | 2026-03-03 01:01:49.678045 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-03 01:01:49.678050 | orchestrator | Tuesday 03 March 2026 00:58:49 +0000 (0:00:01.120) 0:03:10.928 ********* 2026-03-03 01:01:49.678054 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-03 01:01:49.678059 | orchestrator | 2026-03-03 01:01:49.678064 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-03 01:01:49.678068 | orchestrator | Tuesday 03 March 2026 00:58:52 +0000 (0:00:02.771) 0:03:13.700 ********* 2026-03-03 01:01:49.678074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:01:49.678091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-03 01:01:49.678095 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:01:49.678112 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-03 01:01:49.678116 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.678135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:01:49.678147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-03 01:01:49.678152 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678157 | orchestrator | 2026-03-03 01:01:49.678161 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-03 01:01:49.678166 | orchestrator | Tuesday 03 March 2026 00:58:54 +0000 (0:00:01.835) 0:03:15.536 ********* 2026-03-03 01:01:49.678171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:01:49.678182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-03 01:01:49.678186 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:01:49.678212 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-03 01:01:49.678216 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.678221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:01:49.678238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-03 01:01:49.678243 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678247 | orchestrator | 2026-03-03 01:01:49.678251 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-03 01:01:49.678255 | orchestrator | Tuesday 03 March 2026 00:58:56 +0000 (0:00:01.940) 0:03:17.477 ********* 2026-03-03 01:01:49.678262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-03 01:01:49.678267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-03 01:01:49.678271 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-03 01:01:49.678279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-03 01:01:49.678286 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.678290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-03 01:01:49.678297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-03 01:01:49.678301 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678305 | orchestrator | 2026-03-03 01:01:49.678309 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-03 01:01:49.678313 | orchestrator | Tuesday 03 March 2026 00:58:58 +0000 (0:00:02.358) 0:03:19.835 ********* 2026-03-03 01:01:49.678317 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.678321 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.678324 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.678328 | 
orchestrator | 2026-03-03 01:01:49.678332 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-03 01:01:49.678352 | orchestrator | Tuesday 03 March 2026 00:59:00 +0000 (0:00:01.918) 0:03:21.754 ********* 2026-03-03 01:01:49.678356 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678360 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.678364 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678368 | orchestrator | 2026-03-03 01:01:49.678374 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-03 01:01:49.678378 | orchestrator | Tuesday 03 March 2026 00:59:01 +0000 (0:00:01.218) 0:03:22.972 ********* 2026-03-03 01:01:49.678382 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678386 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.678390 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678393 | orchestrator | 2026-03-03 01:01:49.678397 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-03 01:01:49.678401 | orchestrator | Tuesday 03 March 2026 00:59:02 +0000 (0:00:00.267) 0:03:23.240 ********* 2026-03-03 01:01:49.678405 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.678408 | orchestrator | 2026-03-03 01:01:49.678412 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-03 01:01:49.678416 | orchestrator | Tuesday 03 March 2026 00:59:03 +0000 (0:00:01.149) 0:03:24.390 ********* 2026-03-03 01:01:49.678420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-03 01:01:49.678428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-03 01:01:49.678432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
2026-03-03 01:01:49.678436 | orchestrator | 2026-03-03 01:01:49.678440 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-03 01:01:49.678444 | orchestrator | Tuesday 03 March 2026 00:59:04 +0000 (0:00:01.509) 0:03:25.899 ********* 2026-03-03 01:01:49.678451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-03 01:01:49.678456 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-03 01:01:49.678475 | orchestrator | skipping: 
[testbed-node-1] 2026-03-03 01:01:49.678479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-03 01:01:49.678486 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678490 | orchestrator | 2026-03-03 01:01:49.678494 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-03 01:01:49.678498 | orchestrator | Tuesday 03 March 2026 00:59:05 +0000 (0:00:00.406) 0:03:26.305 ********* 2026-03-03 01:01:49.678502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-03 01:01:49.678507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-03 01:01:49.678511 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678516 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.678519 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-03 01:01:49.678523 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678527 | orchestrator | 2026-03-03 01:01:49.678531 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-03 01:01:49.678535 | orchestrator | Tuesday 03 March 2026 00:59:05 +0000 (0:00:00.671) 0:03:26.977 ********* 2026-03-03 01:01:49.678539 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678542 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.678546 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678550 | orchestrator | 2026-03-03 01:01:49.678554 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-03 01:01:49.678558 | orchestrator | Tuesday 03 March 2026 00:59:06 +0000 (0:00:00.387) 0:03:27.365 ********* 2026-03-03 01:01:49.678562 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678566 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.678569 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678573 | orchestrator | 2026-03-03 01:01:49.678577 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-03 01:01:49.678581 | orchestrator | Tuesday 03 March 2026 00:59:07 +0000 (0:00:01.078) 0:03:28.444 ********* 2026-03-03 01:01:49.678584 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.678588 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.678592 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.678596 | orchestrator | 2026-03-03 01:01:49.678600 | orchestrator | TASK [include_role : neutron] 
************************************************** 2026-03-03 01:01:49.678607 | orchestrator | Tuesday 03 March 2026 00:59:07 +0000 (0:00:00.259) 0:03:28.704 ********* 2026-03-03 01:01:49.678610 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.678614 | orchestrator | 2026-03-03 01:01:49.678618 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-03 01:01:49.678622 | orchestrator | Tuesday 03 March 2026 00:59:08 +0000 (0:00:01.247) 0:03:29.951 ********* 2026-03-03 01:01:49.678633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:01:49.678637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-03 01:01:49.678664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:01:49.678668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-03 01:01:49.678712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-03-03 01:01:49.678715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678735 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:01:49.678739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.678755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.678764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-03 01:01:49.678797 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.678809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.678847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 
'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.678867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.678875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.678892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.678896 | orchestrator | 2026-03-03 01:01:49.678900 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-03 01:01:49.678904 | orchestrator | Tuesday 03 March 2026 00:59:12 +0000 (0:00:03.884) 0:03:33.836 ********* 2026-03-03 01:01:49.678911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:01:49.678915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-03 01:01:49.678940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.678965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.678982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.678987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.678991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:01:49.678998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.679005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:01:49.679010 | orchestrator | 2026-03-03 01:01:49 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:01:49.679014 | orchestrator | 2026-03-03 01:01:49 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:01:49.679022 | orchestrator | 2026-03-03 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:01:49.679029 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.679035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-03 01:01:49.679216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-03 01:01:49.679241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.679247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.679298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.679306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.679324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.679330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.679412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-03 01:01:49.679424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.679436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-03 01:01:49.679443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-03 01:01:49.679449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.679477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.679488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-03 01:01:49.679495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.679502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:01:49.679508 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.679514 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.679520 | orchestrator | 2026-03-03 01:01:49.679526 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-03 01:01:49.679532 | orchestrator | Tuesday 03 March 2026 00:59:14 +0000 (0:00:01.335) 0:03:35.172 ********* 2026-03-03 01:01:49.679544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-03 01:01:49.679551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-03 01:01:49.679557 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.679578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-03 01:01:49.679591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-03 01:01:49.679597 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.679603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-03 01:01:49.679609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-03 01:01:49.679615 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.679621 | orchestrator | 2026-03-03 01:01:49.679627 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-03 01:01:49.679632 | orchestrator | Tuesday 03 March 2026 00:59:15 +0000 (0:00:01.769) 0:03:36.941 ********* 2026-03-03 01:01:49.679638 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.679644 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.679651 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.679657 | orchestrator | 2026-03-03 01:01:49.679663 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-03 01:01:49.679669 | orchestrator | Tuesday 03 March 2026 00:59:17 +0000 (0:00:01.389) 0:03:38.331 ********* 2026-03-03 01:01:49.679675 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.679681 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.679687 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.679692 | orchestrator | 2026-03-03 01:01:49.679698 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-03 01:01:49.679704 | orchestrator | Tuesday 03 March 2026 00:59:19 +0000 (0:00:01.855) 0:03:40.187 ********* 2026-03-03 01:01:49.679710 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.679716 | orchestrator | 2026-03-03 01:01:49.679722 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-03 01:01:49.679728 | orchestrator | Tuesday 03 March 2026 00:59:20 +0000 (0:00:01.079) 0:03:41.266 ********* 2026-03-03 01:01:49.679734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.679742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.679777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.679784 | orchestrator | 2026-03-03 01:01:49.679790 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-03 01:01:49.679797 | orchestrator | Tuesday 03 March 2026 00:59:23 +0000 (0:00:03.449) 0:03:44.716 ********* 2026-03-03 01:01:49.679802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.679808 | orchestrator | skipping: [testbed-node-0] 
2026-03-03 01:01:49.679815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.679821 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.679828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.679839 | orchestrator | skipping: 
[testbed-node-2] 2026-03-03 01:01:49.679845 | orchestrator | 2026-03-03 01:01:49.679851 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-03 01:01:49.679858 | orchestrator | Tuesday 03 March 2026 00:59:24 +0000 (0:00:00.456) 0:03:45.173 ********* 2026-03-03 01:01:49.679868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-03 01:01:49.679874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-03 01:01:49.679881 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.679903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-03 01:01:49.679908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-03 01:01:49.679912 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.679916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-03 01:01:49.679920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-03 
01:01:49.679924 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.679927 | orchestrator | 2026-03-03 01:01:49.679931 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-03 01:01:49.679936 | orchestrator | Tuesday 03 March 2026 00:59:24 +0000 (0:00:00.659) 0:03:45.832 ********* 2026-03-03 01:01:49.679941 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.679945 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.679950 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.679955 | orchestrator | 2026-03-03 01:01:49.679959 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-03 01:01:49.679974 | orchestrator | Tuesday 03 March 2026 00:59:26 +0000 (0:00:01.629) 0:03:47.462 ********* 2026-03-03 01:01:49.679979 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.679983 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.679987 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.679991 | orchestrator | 2026-03-03 01:01:49.679996 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-03 01:01:49.680000 | orchestrator | Tuesday 03 March 2026 00:59:28 +0000 (0:00:01.834) 0:03:49.296 ********* 2026-03-03 01:01:49.680005 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.680009 | orchestrator | 2026-03-03 01:01:49.680014 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-03 01:01:49.680019 | orchestrator | Tuesday 03 March 2026 00:59:29 +0000 (0:00:01.303) 0:03:50.599 ********* 2026-03-03 01:01:49.680024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.680037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.680064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.680096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680106 | orchestrator | 2026-03-03 01:01:49.680110 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-03 01:01:49.680115 | orchestrator | Tuesday 03 March 2026 00:59:33 +0000 (0:00:03.695) 0:03:54.295 ********* 2026-03-03 01:01:49.680120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.680128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680137 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.680155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.680161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.680173 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.680178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-03 01:01:49.680183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.680199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.680204 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.680209 | orchestrator |
2026-03-03 01:01:49.680214 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-03-03 01:01:49.680219 | orchestrator | Tuesday 03 March 2026 00:59:34 +0000 (0:00:00.949) 0:03:55.244 *********
2026-03-03 01:01:49.680223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680253 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.680258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680307 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.680313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-03 01:01:49.680319 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.680325 | orchestrator |
2026-03-03 01:01:49.680330 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-03-03 01:01:49.680357 | orchestrator | Tuesday 03 March 2026 00:59:35 +0000 (0:00:00.789) 0:03:56.033 *********
2026-03-03 01:01:49.680364 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:01:49.680371 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:01:49.680378 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:01:49.680384 | orchestrator |
2026-03-03 01:01:49.680454 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-03-03 01:01:49.680481 | orchestrator | Tuesday 03 March 2026 00:59:36 +0000 (0:00:01.333) 0:03:57.367 *********
2026-03-03 01:01:49.680488 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:01:49.680495 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:01:49.680501 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:01:49.680506 | orchestrator |
2026-03-03 01:01:49.680510 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-03-03 01:01:49.680516 | orchestrator | Tuesday 03 March 2026 00:59:38 +0000 (0:00:02.044) 0:03:59.411 *********
2026-03-03 01:01:49.680523 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:01:49.680529 | orchestrator |
2026-03-03 01:01:49.680535 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-03-03 01:01:49.680569 | orchestrator | Tuesday 03 March 2026 00:59:39 +0000 (0:00:01.396) 0:04:00.808 *********
2026-03-03 01:01:49.680577 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-03-03 01:01:49.680584 | orchestrator |
2026-03-03 01:01:49.680590 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-03-03 01:01:49.680604 | orchestrator | Tuesday 03 March 2026 00:59:40 +0000 (0:00:00.762) 0:04:01.570 *********
2026-03-03 01:01:49.680612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.680619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.680625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.680632 | orchestrator |
2026-03-03 01:01:49.680639 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-03-03 01:01:49.680645 | orchestrator | Tuesday 03 March 2026 00:59:44 +0000 (0:00:03.751) 0:04:05.321 *********
2026-03-03 01:01:49.680652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.680658 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.680664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.680670 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.680681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.680687 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.680693 | orchestrator |
2026-03-03 01:01:49.680699 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-03-03 01:01:49.680706 | orchestrator | Tuesday 03 March 2026 00:59:45 +0000 (0:00:00.920) 0:04:06.241 *********
2026-03-03 01:01:49.680736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-03 01:01:49.680745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-03 01:01:49.680752 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.680758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-03 01:01:49.680764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-03 01:01:49.680771 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.680777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-03 01:01:49.680783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-03 01:01:49.680789 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.680794 | orchestrator |
2026-03-03 01:01:49.680800 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-03 01:01:49.680806 | orchestrator | Tuesday 03 March 2026 00:59:46 +0000 (0:00:01.461) 0:04:07.703 *********
2026-03-03 01:01:49.680812 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:01:49.680818 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:01:49.680824 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:01:49.680830 | orchestrator |
2026-03-03 01:01:49.680836 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-03 01:01:49.680841 | orchestrator | Tuesday 03 March 2026 00:59:48 +0000 (0:00:02.216) 0:04:09.919 *********
2026-03-03 01:01:49.680848 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:01:49.680853 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:01:49.680859 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:01:49.680865 | orchestrator |
2026-03-03 01:01:49.680871 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-03 01:01:49.680878 | orchestrator | Tuesday 03 March 2026 00:59:51 +0000 (0:00:02.677) 0:04:12.597 *********
2026-03-03 01:01:49.680885 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-03 01:01:49.680891 | orchestrator |
2026-03-03 01:01:49.680897 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-03 01:01:49.680902 | orchestrator | Tuesday 03 March 2026 00:59:52 +0000 (0:00:01.141) 0:04:13.739 *********
2026-03-03 01:01:49.680909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.680916 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.680930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.680937 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.680970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.680978 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.680984 | orchestrator |
2026-03-03 01:01:49.680991 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-03 01:01:49.680996 | orchestrator | Tuesday 03 March 2026 00:59:53 +0000 (0:00:01.093) 0:04:14.832 *********
2026-03-03 01:01:49.681002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.681008 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.681015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.681022 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.681028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-03 01:01:49.681035 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.681040 | orchestrator |
2026-03-03 01:01:49.681046 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-03 01:01:49.681052 | orchestrator | Tuesday 03 March 2026 00:59:54 +0000 (0:00:01.135) 0:04:15.968 *********
2026-03-03 01:01:49.681061 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.681069 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.681075 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.681081 | orchestrator |
2026-03-03 01:01:49.681087 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-03 01:01:49.681094 | orchestrator | Tuesday 03 March 2026 00:59:56 +0000 (0:00:01.536) 0:04:17.504 *********
2026-03-03 01:01:49.681106 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:01:49.681112 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:01:49.681119 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:01:49.681124 | orchestrator |
2026-03-03 01:01:49.681130 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-03 01:01:49.681135 | orchestrator | Tuesday 03 March 2026 00:59:58 +0000 (0:00:02.338) 0:04:19.843 *********
2026-03-03 01:01:49.681142 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:01:49.681149 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:01:49.681154 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:01:49.681161 | orchestrator |
2026-03-03 01:01:49.681167 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-03-03 01:01:49.681173 | orchestrator | Tuesday 03 March 2026 01:00:01 +0000 (0:00:02.776) 0:04:22.620 *********
2026-03-03 01:01:49.681180 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-03 01:01:49.681186 | orchestrator |
2026-03-03 01:01:49.681192 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-03 01:01:49.681198 | orchestrator | Tuesday 03 March 2026 01:00:02 +0000 (0:00:00.744) 0:04:23.364 *********
2026-03-03 01:01:49.681210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-03 01:01:49.681215 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.681238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-03 01:01:49.681243 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.681247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-03 01:01:49.681251 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.681255 | orchestrator |
2026-03-03 01:01:49.681259 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-03 01:01:49.681263 | orchestrator | Tuesday 03 March 2026 01:00:03 +0000 (0:00:01.131) 0:04:24.495 *********
2026-03-03 01:01:49.681267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-03 01:01:49.681271 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.681279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-03 01:01:49.681283 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.681287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-03 01:01:49.681291 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.681295 | orchestrator |
2026-03-03 01:01:49.681298 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-03 01:01:49.681302 | orchestrator | Tuesday 03 March 2026 01:00:04 +0000 (0:00:01.391) 0:04:25.887 *********
2026-03-03 01:01:49.681306 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.681310 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:01:49.681314 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:01:49.681318 | orchestrator |
2026-03-03 01:01:49.681322 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-03 01:01:49.681326 | orchestrator | Tuesday 03 March 2026 01:00:06 +0000 (0:00:01.157) 0:04:27.044 *********
2026-03-03 01:01:49.681330 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:01:49.681333 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:01:49.681360 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:01:49.681364 | orchestrator |
2026-03-03 01:01:49.681368 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-03 01:01:49.681372 | orchestrator | Tuesday 03 March 2026 01:00:08 +0000 (0:00:02.027) 0:04:29.072 *********
2026-03-03 01:01:49.681376 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:01:49.681380 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:01:49.681384 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:01:49.681388 | orchestrator |
2026-03-03 01:01:49.681395 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-03 01:01:49.681399 | orchestrator | Tuesday 03 March 2026 01:00:10 +0000 (0:00:02.729) 0:04:31.802 *********
2026-03-03 01:01:49.681403 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:01:49.681407 | orchestrator |
2026-03-03 01:01:49.681411 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-03 01:01:49.681415 | orchestrator | Tuesday 03 March 2026 01:00:12 +0000 (0:00:01.375) 0:04:33.178 *********
2026-03-03 01:01:49.681434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-03 01:01:49.681444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-03 01:01:49.681449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-03 01:01:49.681456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-03 01:01:49.681462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-03 01:01:49.681473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-03 01:01:49.681499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-03 01:01:49.681507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-03 01:01:49.681519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.681526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.681531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-03 01:01:49.681555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-03 01:01:49.681578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-03 01:01:49.681585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-03 01:01:49.681597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.681603 | orchestrator |
2026-03-03 01:01:49.681609 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-03-03 01:01:49.681615 | orchestrator | Tuesday 03 March 2026 01:00:15 +0000 (0:00:03.106) 0:04:36.284 *********
2026-03-03 01:01:49.681621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-03 01:01:49.681627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-03 01:01:49.681637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-03 01:01:49.681657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-03 01:01:49.681668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:01:49.681674 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:01:49.681680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.681686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:01:49.681693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.681699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.681709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.681732 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.681739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.681746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:01:49.681752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.681759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:01:49.681765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:01:49.681771 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.681777 | orchestrator | 2026-03-03 01:01:49.681783 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-03 01:01:49.681789 | orchestrator | Tuesday 03 March 2026 01:00:15 +0000 (0:00:00.715) 0:04:36.999 ********* 2026-03-03 01:01:49.681798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-03 01:01:49.681809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-03 01:01:49.681815 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.681834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-03 01:01:49.681841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-03 01:01:49.681847 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.681853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-03 01:01:49.681859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-03 01:01:49.681865 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.681871 | orchestrator | 2026-03-03 01:01:49.681877 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-03 01:01:49.681883 | orchestrator | Tuesday 03 March 2026 01:00:17 +0000 (0:00:01.244) 0:04:38.243 ********* 2026-03-03 01:01:49.681889 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.681895 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.681901 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.681907 | orchestrator | 2026-03-03 01:01:49.681913 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-03 01:01:49.681919 | orchestrator | Tuesday 03 March 2026 01:00:18 +0000 (0:00:01.275) 0:04:39.519 ********* 2026-03-03 01:01:49.681925 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.681931 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.681937 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.681942 | orchestrator | 2026-03-03 01:01:49.681948 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-03 01:01:49.681954 | orchestrator | Tuesday 03 
March 2026 01:00:20 +0000 (0:00:02.016) 0:04:41.536 ********* 2026-03-03 01:01:49.681960 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.681966 | orchestrator | 2026-03-03 01:01:49.681972 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-03 01:01:49.681978 | orchestrator | Tuesday 03 March 2026 01:00:22 +0000 (0:00:01.612) 0:04:43.149 ********* 2026-03-03 01:01:49.681985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:01:49.681992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:01:49.682059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:01:49.682071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:01:49.682079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:01:49.682086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:01:49.682098 | orchestrator | 2026-03-03 01:01:49.682104 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-03 01:01:49.682110 | orchestrator | Tuesday 03 March 2026 01:00:27 +0000 (0:00:05.343) 0:04:48.493 ********* 2026-03-03 01:01:49.682157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-03 01:01:49.682166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-03 01:01:49.682173 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.682179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-03 01:01:49.682186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-03 01:01:49.682199 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.682227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-03 01:01:49.682235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-03 01:01:49.682240 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.682243 | orchestrator | 2026-03-03 01:01:49.682247 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-03 01:01:49.682251 | orchestrator | Tuesday 03 March 2026 01:00:28 +0000 (0:00:00.687) 0:04:49.181 ********* 2026-03-03 01:01:49.682255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-03 01:01:49.682259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-03 01:01:49.682264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-03 01:01:49.682268 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.682272 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-03 01:01:49.682280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-03 01:01:49.682284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-03 01:01:49.682288 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.682292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-03 01:01:49.682296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-03 01:01:49.682300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-03 01:01:49.682304 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.682308 | orchestrator | 2026-03-03 01:01:49.682312 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-03 01:01:49.682320 | orchestrator | Tuesday 03 March 2026 01:00:29 +0000 (0:00:00.925) 0:04:50.106 ********* 2026-03-03 
01:01:49.682325 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.682328 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.682332 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.682357 | orchestrator | 2026-03-03 01:01:49.682361 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-03 01:01:49.682365 | orchestrator | Tuesday 03 March 2026 01:00:29 +0000 (0:00:00.866) 0:04:50.973 ********* 2026-03-03 01:01:49.682369 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.682376 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.682382 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.682391 | orchestrator | 2026-03-03 01:01:49.682418 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-03 01:01:49.682425 | orchestrator | Tuesday 03 March 2026 01:00:31 +0000 (0:00:01.300) 0:04:52.274 ********* 2026-03-03 01:01:49.682430 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.682436 | orchestrator | 2026-03-03 01:01:49.682441 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-03 01:01:49.682447 | orchestrator | Tuesday 03 March 2026 01:00:32 +0000 (0:00:01.353) 0:04:53.628 ********* 2026-03-03 01:01:49.682454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-03 01:01:49.682460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:01:49.682474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-03 01:01:49.682522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:01:49.682529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-03 01:01:49.682561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:01:49.682571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-03 01:01:49.682607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-03 01:01:49.682615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682639 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-03 01:01:49.682651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-03 01:01:49.682657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-03 01:01:49.682692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-03 01:01:49.682705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-03 01:01:49.682712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682724 | orchestrator | 2026-03-03 01:01:49.682730 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-03 01:01:49.682736 | orchestrator | Tuesday 03 March 2026 01:00:36 +0000 (0:00:04.255) 0:04:57.884 ********* 2026-03-03 01:01:49.682743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-03 01:01:49.682752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:01:49.682764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-03 01:01:49.682798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-03 01:01:49.682807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-03 01:01:49.682820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:01:49.682838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682858 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.682864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  
2026-03-03 01:01:49.682902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-03 01:01:49.682910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-03 01:01:49.682916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:01:49.682929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682965 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.682972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.682979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.682986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-03 01:01:49.682993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-03 01:01:49.683003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.683021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:01:49.683028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:01:49.683034 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683041 | orchestrator | 2026-03-03 01:01:49.683047 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-03 01:01:49.683054 | orchestrator | Tuesday 03 March 2026 01:00:37 +0000 (0:00:00.740) 0:04:58.624 ********* 2026-03-03 01:01:49.683060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-03 01:01:49.683067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-03 
01:01:49.683075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-03 01:01:49.683082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-03 01:01:49.683089 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-03 01:01:49.683101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-03 01:01:49.683108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-03 01:01:49.683114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-03 01:01:49.683121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-03 01:01:49.683132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-03 01:01:49.683139 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-03 01:01:49.683178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-03 01:01:49.683185 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683191 | orchestrator | 2026-03-03 01:01:49.683197 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-03 01:01:49.683203 | orchestrator | Tuesday 03 March 2026 01:00:38 +0000 (0:00:00.989) 0:04:59.614 ********* 2026-03-03 01:01:49.683209 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683216 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683222 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683228 | orchestrator | 2026-03-03 01:01:49.683235 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-03 01:01:49.683241 | orchestrator | Tuesday 03 March 2026 01:00:39 +0000 (0:00:00.460) 
0:05:00.074 ********* 2026-03-03 01:01:49.683247 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683253 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683260 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683266 | orchestrator | 2026-03-03 01:01:49.683272 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-03 01:01:49.683278 | orchestrator | Tuesday 03 March 2026 01:00:40 +0000 (0:00:01.495) 0:05:01.569 ********* 2026-03-03 01:01:49.683284 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:01:49.683290 | orchestrator | 2026-03-03 01:01:49.683295 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-03 01:01:49.683301 | orchestrator | Tuesday 03 March 2026 01:00:42 +0000 (0:00:01.691) 0:05:03.261 ********* 2026-03-03 01:01:49.683307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 01:01:49.683314 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 01:01:49.683329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-03 01:01:49.683374 | orchestrator | 2026-03-03 
01:01:49.683381 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-03 01:01:49.683392 | orchestrator | Tuesday 03 March 2026 01:00:44 +0000 (0:00:02.498) 0:05:05.760 ********* 2026-03-03 01:01:49.683399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-03 01:01:49.683406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-03 01:01:49.683413 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683420 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-03 01:01:49.683439 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683446 | orchestrator | 2026-03-03 01:01:49.683453 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-03 01:01:49.683460 | orchestrator | Tuesday 03 March 2026 01:00:45 +0000 (0:00:00.744) 0:05:06.504 ********* 2026-03-03 01:01:49.683467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-03 
01:01:49.683474 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-03 01:01:49.683491 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-03 01:01:49.683504 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683511 | orchestrator | 2026-03-03 01:01:49.683518 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-03 01:01:49.683525 | orchestrator | Tuesday 03 March 2026 01:00:46 +0000 (0:00:00.667) 0:05:07.172 ********* 2026-03-03 01:01:49.683532 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683542 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683549 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683556 | orchestrator | 2026-03-03 01:01:49.683563 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-03 01:01:49.683570 | orchestrator | Tuesday 03 March 2026 01:00:46 +0000 (0:00:00.464) 0:05:07.636 ********* 2026-03-03 01:01:49.683577 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683583 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683590 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683596 | orchestrator | 2026-03-03 01:01:49.683602 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-03 01:01:49.683607 | orchestrator | Tuesday 03 March 2026 01:00:47 +0000 (0:00:01.332) 0:05:08.968 ********* 2026-03-03 01:01:49.683613 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-03 01:01:49.683618 | orchestrator | 2026-03-03 01:01:49.683625 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-03 01:01:49.683632 | orchestrator | Tuesday 03 March 2026 01:00:49 +0000 (0:00:01.728) 0:05:10.697 ********* 2026-03-03 01:01:49.683639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.683653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.683660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.683675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.683684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.683695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-03 01:01:49.683702 | orchestrator | 2026-03-03 01:01:49.683709 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-03 01:01:49.683715 | orchestrator | Tuesday 03 March 2026 01:00:55 +0000 (0:00:06.283) 0:05:16.980 ********* 2026-03-03 01:01:49.683722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.683735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.683742 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.683760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.683764 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.683775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-03 01:01:49.683779 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683783 | orchestrator | 2026-03-03 01:01:49.683787 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-03 01:01:49.683793 | orchestrator | Tuesday 03 March 2026 01:00:56 +0000 (0:00:00.662) 0:05:17.643 ********* 2026-03-03 01:01:49.683797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683817 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683825 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683836 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-03 01:01:49.683856 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683860 | orchestrator | 2026-03-03 
01:01:49.683864 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-03 01:01:49.683868 | orchestrator | Tuesday 03 March 2026 01:00:58 +0000 (0:00:01.623) 0:05:19.267 ********* 2026-03-03 01:01:49.683872 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.683876 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.683879 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.683885 | orchestrator | 2026-03-03 01:01:49.683892 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-03 01:01:49.683897 | orchestrator | Tuesday 03 March 2026 01:00:59 +0000 (0:00:01.150) 0:05:20.418 ********* 2026-03-03 01:01:49.683904 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.683913 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.683921 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.683927 | orchestrator | 2026-03-03 01:01:49.683933 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-03 01:01:49.683939 | orchestrator | Tuesday 03 March 2026 01:01:01 +0000 (0:00:02.105) 0:05:22.523 ********* 2026-03-03 01:01:49.683945 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683950 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683956 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.683962 | orchestrator | 2026-03-03 01:01:49.683968 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-03 01:01:49.683974 | orchestrator | Tuesday 03 March 2026 01:01:01 +0000 (0:00:00.374) 0:05:22.897 ********* 2026-03-03 01:01:49.683985 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.683991 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.683997 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684003 | orchestrator | 2026-03-03 
01:01:49.684009 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-03 01:01:49.684018 | orchestrator | Tuesday 03 March 2026 01:01:02 +0000 (0:00:00.344) 0:05:23.241 ********* 2026-03-03 01:01:49.684025 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684031 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684036 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684043 | orchestrator | 2026-03-03 01:01:49.684049 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-03 01:01:49.684055 | orchestrator | Tuesday 03 March 2026 01:01:02 +0000 (0:00:00.673) 0:05:23.915 ********* 2026-03-03 01:01:49.684061 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684068 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684074 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684080 | orchestrator | 2026-03-03 01:01:49.684084 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-03 01:01:49.684090 | orchestrator | Tuesday 03 March 2026 01:01:03 +0000 (0:00:00.362) 0:05:24.278 ********* 2026-03-03 01:01:49.684096 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684102 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684107 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684115 | orchestrator | 2026-03-03 01:01:49.684123 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-03 01:01:49.684129 | orchestrator | Tuesday 03 March 2026 01:01:03 +0000 (0:00:00.318) 0:05:24.596 ********* 2026-03-03 01:01:49.684135 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684140 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684146 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684152 | orchestrator | 2026-03-03 
01:01:49.684157 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-03 01:01:49.684163 | orchestrator | Tuesday 03 March 2026 01:01:04 +0000 (0:00:00.840) 0:05:25.436 ********* 2026-03-03 01:01:49.684169 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:01:49.684176 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:01:49.684182 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:01:49.684187 | orchestrator | 2026-03-03 01:01:49.684193 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-03 01:01:49.684199 | orchestrator | Tuesday 03 March 2026 01:01:05 +0000 (0:00:00.730) 0:05:26.167 ********* 2026-03-03 01:01:49.684205 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:01:49.684211 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:01:49.684217 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:01:49.684222 | orchestrator | 2026-03-03 01:01:49.684228 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-03 01:01:49.684234 | orchestrator | Tuesday 03 March 2026 01:01:05 +0000 (0:00:00.383) 0:05:26.551 ********* 2026-03-03 01:01:49.684240 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:01:49.684246 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:01:49.684252 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:01:49.684258 | orchestrator | 2026-03-03 01:01:49.684264 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-03 01:01:49.684270 | orchestrator | Tuesday 03 March 2026 01:01:06 +0000 (0:00:01.033) 0:05:27.584 ********* 2026-03-03 01:01:49.684276 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:01:49.684282 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:01:49.684288 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:01:49.684294 | orchestrator | 2026-03-03 01:01:49.684300 | orchestrator | RUNNING HANDLER [loadbalancer : Stop 
backup proxysql container] **************** 2026-03-03 01:01:49.684306 | orchestrator | Tuesday 03 March 2026 01:01:08 +0000 (0:00:01.481) 0:05:29.066 ********* 2026-03-03 01:01:49.684312 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:01:49.684324 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:01:49.684330 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:01:49.684352 | orchestrator | 2026-03-03 01:01:49.684358 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-03 01:01:49.684364 | orchestrator | Tuesday 03 March 2026 01:01:09 +0000 (0:00:00.942) 0:05:30.008 ********* 2026-03-03 01:01:49.684372 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.684378 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.684384 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.684391 | orchestrator | 2026-03-03 01:01:49.684397 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-03 01:01:49.684403 | orchestrator | Tuesday 03 March 2026 01:01:18 +0000 (0:00:09.952) 0:05:39.960 ********* 2026-03-03 01:01:49.684410 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:01:49.684416 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:01:49.684423 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:01:49.684429 | orchestrator | 2026-03-03 01:01:49.684435 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-03 01:01:49.684442 | orchestrator | Tuesday 03 March 2026 01:01:19 +0000 (0:00:00.750) 0:05:40.711 ********* 2026-03-03 01:01:49.684448 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.684455 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.684461 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.684468 | orchestrator | 2026-03-03 01:01:49.684474 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 
2026-03-03 01:01:49.684480 | orchestrator | Tuesday 03 March 2026 01:01:32 +0000 (0:00:13.164) 0:05:53.875 ********* 2026-03-03 01:01:49.684487 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:01:49.684493 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:01:49.684499 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:01:49.684506 | orchestrator | 2026-03-03 01:01:49.684512 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-03 01:01:49.684518 | orchestrator | Tuesday 03 March 2026 01:01:33 +0000 (0:00:00.713) 0:05:54.589 ********* 2026-03-03 01:01:49.684525 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:01:49.684589 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:01:49.684607 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:01:49.684614 | orchestrator | 2026-03-03 01:01:49.684623 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-03 01:01:49.684630 | orchestrator | Tuesday 03 March 2026 01:01:42 +0000 (0:00:08.715) 0:06:03.305 ********* 2026-03-03 01:01:49.684636 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684643 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684650 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684656 | orchestrator | 2026-03-03 01:01:49.684662 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-03 01:01:49.684669 | orchestrator | Tuesday 03 March 2026 01:01:42 +0000 (0:00:00.359) 0:06:03.665 ********* 2026-03-03 01:01:49.684675 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684687 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684694 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684700 | orchestrator | 2026-03-03 01:01:49.684707 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-03 
01:01:49.684713 | orchestrator | Tuesday 03 March 2026 01:01:43 +0000 (0:00:00.664) 0:06:04.329 ********* 2026-03-03 01:01:49.684719 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684726 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684732 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684738 | orchestrator | 2026-03-03 01:01:49.684745 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-03 01:01:49.684752 | orchestrator | Tuesday 03 March 2026 01:01:43 +0000 (0:00:00.358) 0:06:04.688 ********* 2026-03-03 01:01:49.684758 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684764 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684771 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684783 | orchestrator | 2026-03-03 01:01:49.684790 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-03 01:01:49.684796 | orchestrator | Tuesday 03 March 2026 01:01:44 +0000 (0:00:00.344) 0:06:05.033 ********* 2026-03-03 01:01:49.684799 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684803 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684807 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684811 | orchestrator | 2026-03-03 01:01:49.684815 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-03 01:01:49.684818 | orchestrator | Tuesday 03 March 2026 01:01:44 +0000 (0:00:00.357) 0:06:05.390 ********* 2026-03-03 01:01:49.684822 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:01:49.684826 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:01:49.684830 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:01:49.684833 | orchestrator | 2026-03-03 01:01:49.684837 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-03 
01:01:49.684841 | orchestrator | Tuesday 03 March 2026 01:01:44 +0000 (0:00:00.344) 0:06:05.735 ********* 2026-03-03 01:01:49.684845 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:01:49.684849 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:01:49.684852 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:01:49.684856 | orchestrator | 2026-03-03 01:01:49.684860 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-03 01:01:49.684864 | orchestrator | Tuesday 03 March 2026 01:01:46 +0000 (0:00:01.362) 0:06:07.097 ********* 2026-03-03 01:01:49.684868 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:01:49.684871 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:01:49.684875 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:01:49.684879 | orchestrator | 2026-03-03 01:01:49.684883 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:01:49.684887 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-03 01:01:49.684892 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-03 01:01:49.684896 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-03 01:01:49.684900 | orchestrator | 2026-03-03 01:01:49.684903 | orchestrator | 2026-03-03 01:01:49.684907 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:01:49.684911 | orchestrator | Tuesday 03 March 2026 01:01:46 +0000 (0:00:00.892) 0:06:07.990 ********* 2026-03-03 01:01:49.684915 | orchestrator | =============================================================================== 2026-03-03 01:01:49.684918 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.16s 2026-03-03 01:01:49.684922 | orchestrator | loadbalancer : 
Start backup haproxy container --------------------------- 9.95s 2026-03-03 01:01:49.684926 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.72s 2026-03-03 01:01:49.684930 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.28s 2026-03-03 01:01:49.684933 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.34s 2026-03-03 01:01:49.684937 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.01s 2026-03-03 01:01:49.684941 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.84s 2026-03-03 01:01:49.684945 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.75s 2026-03-03 01:01:49.684948 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.68s 2026-03-03 01:01:49.684952 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.34s 2026-03-03 01:01:49.684956 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.26s 2026-03-03 01:01:49.684963 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.24s 2026-03-03 01:01:49.684967 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.21s 2026-03-03 01:01:49.684970 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.02s 2026-03-03 01:01:49.684977 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.01s 2026-03-03 01:01:49.684981 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.88s 2026-03-03 01:01:49.684985 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.75s 2026-03-03 01:01:49.684989 | orchestrator | haproxy-config : 
Copying over nova haproxy config ----------------------- 3.70s 2026-03-03 01:01:49.684993 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.55s 2026-03-03 01:01:49.684997 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.46s 2026-03-03 01:01:52.734536 | orchestrator | 2026-03-03 01:01:52 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:01:52.736844 | orchestrator | 2026-03-03 01:01:52 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:01:52.738813 | orchestrator | 2026-03-03 01:01:52 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:01:52.738844 | orchestrator | 2026-03-03 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:01:55.775507 | orchestrator | 2026-03-03 01:01:55 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:01:55.777341 | orchestrator | 2026-03-03 01:01:55 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:01:55.779406 | orchestrator | 2026-03-03 01:01:55 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:01:55.779679 | orchestrator | 2026-03-03 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:01:58.814607 | orchestrator | 2026-03-03 01:01:58 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:01:58.815090 | orchestrator | 2026-03-03 01:01:58 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:01:58.815676 | orchestrator | 2026-03-03 01:01:58 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:01:58.815714 | orchestrator | 2026-03-03 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:01.851083 | orchestrator | 2026-03-03 01:02:01 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state 
STARTED 2026-03-03 01:02:01.851521 | orchestrator | 2026-03-03 01:02:01 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:01.852057 | orchestrator | 2026-03-03 01:02:01 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:01.852113 | orchestrator | 2026-03-03 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:04.930874 | orchestrator | 2026-03-03 01:02:04 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:04.931148 | orchestrator | 2026-03-03 01:02:04 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:04.931835 | orchestrator | 2026-03-03 01:02:04 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:04.931858 | orchestrator | 2026-03-03 01:02:04 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:07.991951 | orchestrator | 2026-03-03 01:02:07 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:07.992001 | orchestrator | 2026-03-03 01:02:07 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:07.992020 | orchestrator | 2026-03-03 01:02:07 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:07.992024 | orchestrator | 2026-03-03 01:02:07 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:10.993021 | orchestrator | 2026-03-03 01:02:10 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:10.993074 | orchestrator | 2026-03-03 01:02:10 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:10.993082 | orchestrator | 2026-03-03 01:02:10 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:10.993088 | orchestrator | 2026-03-03 01:02:10 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:14.021975 | orchestrator | 
2026-03-03 01:02:14 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:14.022077 | orchestrator | 2026-03-03 01:02:14 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:14.022090 | orchestrator | 2026-03-03 01:02:14 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:14.022114 | orchestrator | 2026-03-03 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:17.070848 | orchestrator | 2026-03-03 01:02:17 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:17.070924 | orchestrator | 2026-03-03 01:02:17 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:17.075301 | orchestrator | 2026-03-03 01:02:17 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:17.075362 | orchestrator | 2026-03-03 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:20.104954 | orchestrator | 2026-03-03 01:02:20 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:20.106168 | orchestrator | 2026-03-03 01:02:20 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:20.107860 | orchestrator | 2026-03-03 01:02:20 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:20.108603 | orchestrator | 2026-03-03 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:23.145235 | orchestrator | 2026-03-03 01:02:23 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:23.146884 | orchestrator | 2026-03-03 01:02:23 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:23.148473 | orchestrator | 2026-03-03 01:02:23 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:23.148517 | orchestrator | 2026-03-03 01:02:23 | INFO  | 
Wait 1 second(s) until the next check 2026-03-03 01:02:26.193052 | orchestrator | 2026-03-03 01:02:26 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:26.194343 | orchestrator | 2026-03-03 01:02:26 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:26.195698 | orchestrator | 2026-03-03 01:02:26 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:26.195739 | orchestrator | 2026-03-03 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:29.234122 | orchestrator | 2026-03-03 01:02:29 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:29.234970 | orchestrator | 2026-03-03 01:02:29 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:29.235591 | orchestrator | 2026-03-03 01:02:29 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:29.235640 | orchestrator | 2026-03-03 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:32.277938 | orchestrator | 2026-03-03 01:02:32 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:32.280172 | orchestrator | 2026-03-03 01:02:32 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:32.282502 | orchestrator | 2026-03-03 01:02:32 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:32.282550 | orchestrator | 2026-03-03 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:35.326956 | orchestrator | 2026-03-03 01:02:35 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:35.327926 | orchestrator | 2026-03-03 01:02:35 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:35.329008 | orchestrator | 2026-03-03 01:02:35 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state 
STARTED 2026-03-03 01:02:35.329163 | orchestrator | 2026-03-03 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:38.382927 | orchestrator | 2026-03-03 01:02:38 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:38.383727 | orchestrator | 2026-03-03 01:02:38 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:38.386923 | orchestrator | 2026-03-03 01:02:38 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:38.386997 | orchestrator | 2026-03-03 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:41.432489 | orchestrator | 2026-03-03 01:02:41 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:41.435188 | orchestrator | 2026-03-03 01:02:41 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:41.437564 | orchestrator | 2026-03-03 01:02:41 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:41.437691 | orchestrator | 2026-03-03 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:44.476582 | orchestrator | 2026-03-03 01:02:44 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:44.478432 | orchestrator | 2026-03-03 01:02:44 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:44.479944 | orchestrator | 2026-03-03 01:02:44 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:44.479980 | orchestrator | 2026-03-03 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:47.522001 | orchestrator | 2026-03-03 01:02:47 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:47.523795 | orchestrator | 2026-03-03 01:02:47 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:47.526076 | orchestrator | 
2026-03-03 01:02:47 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:47.526116 | orchestrator | 2026-03-03 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:50.569733 | orchestrator | 2026-03-03 01:02:50 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:50.573430 | orchestrator | 2026-03-03 01:02:50 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:50.576079 | orchestrator | 2026-03-03 01:02:50 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:50.576473 | orchestrator | 2026-03-03 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:53.634520 | orchestrator | 2026-03-03 01:02:53 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:53.636662 | orchestrator | 2026-03-03 01:02:53 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:53.639083 | orchestrator | 2026-03-03 01:02:53 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:53.639291 | orchestrator | 2026-03-03 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:56.681232 | orchestrator | 2026-03-03 01:02:56 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:56.682742 | orchestrator | 2026-03-03 01:02:56 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:56.685075 | orchestrator | 2026-03-03 01:02:56 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:56.685383 | orchestrator | 2026-03-03 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:02:59.723683 | orchestrator | 2026-03-03 01:02:59 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:02:59.724471 | orchestrator | 2026-03-03 01:02:59 | INFO  | Task 
80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:02:59.728193 | orchestrator | 2026-03-03 01:02:59 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:02:59.728274 | orchestrator | 2026-03-03 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:02.766250 | orchestrator | 2026-03-03 01:03:02 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:03:02.768124 | orchestrator | 2026-03-03 01:03:02 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:02.770329 | orchestrator | 2026-03-03 01:03:02 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:02.770427 | orchestrator | 2026-03-03 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:05.817393 | orchestrator | 2026-03-03 01:03:05 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:03:05.819044 | orchestrator | 2026-03-03 01:03:05 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:05.820617 | orchestrator | 2026-03-03 01:03:05 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:05.820716 | orchestrator | 2026-03-03 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:08.865090 | orchestrator | 2026-03-03 01:03:08 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:03:08.866165 | orchestrator | 2026-03-03 01:03:08 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:08.869330 | orchestrator | 2026-03-03 01:03:08 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:08.869388 | orchestrator | 2026-03-03 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:11.914778 | orchestrator | 2026-03-03 01:03:11 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state 
STARTED 2026-03-03 01:03:11.915628 | orchestrator | 2026-03-03 01:03:11 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:11.918806 | orchestrator | 2026-03-03 01:03:11 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:11.919661 | orchestrator | 2026-03-03 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:14.960895 | orchestrator | 2026-03-03 01:03:14 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:03:14.962591 | orchestrator | 2026-03-03 01:03:14 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:14.964092 | orchestrator | 2026-03-03 01:03:14 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:14.964163 | orchestrator | 2026-03-03 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:18.000429 | orchestrator | 2026-03-03 01:03:18 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:03:18.002465 | orchestrator | 2026-03-03 01:03:18 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:18.004405 | orchestrator | 2026-03-03 01:03:18 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:18.004460 | orchestrator | 2026-03-03 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:21.042233 | orchestrator | 2026-03-03 01:03:21 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:03:21.044065 | orchestrator | 2026-03-03 01:03:21 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:21.046036 | orchestrator | 2026-03-03 01:03:21 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:21.046096 | orchestrator | 2026-03-03 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:24.087281 | orchestrator | 
2026-03-03 01:03:24 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state STARTED 2026-03-03 01:03:24.090430 | orchestrator | 2026-03-03 01:03:24 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:24.093770 | orchestrator | 2026-03-03 01:03:24 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:24.093833 | orchestrator | 2026-03-03 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:27.150351 | orchestrator | 2026-03-03 01:03:27 | INFO  | Task edcd954b-f9d5-453d-8ad7-15852b567718 is in state SUCCESS 2026-03-03 01:03:27.158000 | orchestrator | 2026-03-03 01:03:27.158087 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-03 01:03:27.158094 | orchestrator | 2.16.14 2026-03-03 01:03:27.158098 | orchestrator | 2026-03-03 01:03:27.158101 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-03 01:03:27.158105 | orchestrator | 2026-03-03 01:03:27.158108 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-03 01:03:27.158112 | orchestrator | Tuesday 03 March 2026 00:53:23 +0000 (0:00:00.758) 0:00:00.759 ********* 2026-03-03 01:03:27.158116 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.158120 | orchestrator | 2026-03-03 01:03:27.158124 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-03 01:03:27.158127 | orchestrator | Tuesday 03 March 2026 00:53:24 +0000 (0:00:01.096) 0:00:01.855 ********* 2026-03-03 01:03:27.158130 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.158133 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.158137 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.158140 | orchestrator | 
ok: [testbed-node-1] 2026-03-03 01:03:27.158143 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.158146 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.158149 | orchestrator | 2026-03-03 01:03:27.158152 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-03 01:03:27.158155 | orchestrator | Tuesday 03 March 2026 00:53:26 +0000 (0:00:01.373) 0:00:03.229 ********* 2026-03-03 01:03:27.158170 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.158173 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.158176 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.158179 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.158182 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.158187 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.158192 | orchestrator | 2026-03-03 01:03:27.158230 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-03 01:03:27.158238 | orchestrator | Tuesday 03 March 2026 00:53:27 +0000 (0:00:00.881) 0:00:04.110 ********* 2026-03-03 01:03:27.158293 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.158299 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.158304 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.158310 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.158314 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.158317 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.158320 | orchestrator | 2026-03-03 01:03:27.158329 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-03 01:03:27.158333 | orchestrator | Tuesday 03 March 2026 00:53:28 +0000 (0:00:00.895) 0:00:05.006 ********* 2026-03-03 01:03:27.158336 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.158339 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.158342 | orchestrator | ok: [testbed-node-5] 2026-03-03 
01:03:27.158346 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.158349 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.158352 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.158355 | orchestrator | 2026-03-03 01:03:27.158358 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-03 01:03:27.158361 | orchestrator | Tuesday 03 March 2026 00:53:28 +0000 (0:00:00.818) 0:00:05.824 ********* 2026-03-03 01:03:27.158364 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.158368 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.158371 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.158374 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.158377 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.158380 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.158383 | orchestrator | 2026-03-03 01:03:27.158386 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-03 01:03:27.158389 | orchestrator | Tuesday 03 March 2026 00:53:29 +0000 (0:00:00.656) 0:00:06.481 ********* 2026-03-03 01:03:27.158392 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.158395 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.158398 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.158401 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.158404 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.158407 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.158410 | orchestrator | 2026-03-03 01:03:27.158414 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-03 01:03:27.158417 | orchestrator | Tuesday 03 March 2026 00:53:30 +0000 (0:00:01.080) 0:00:07.561 ********* 2026-03-03 01:03:27.158420 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.158424 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.158428 | 
orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.158434 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.158439 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.158444 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.158630 | orchestrator | 2026-03-03 01:03:27.158636 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-03 01:03:27.158639 | orchestrator | Tuesday 03 March 2026 00:53:31 +0000 (0:00:00.745) 0:00:08.307 ********* 2026-03-03 01:03:27.158643 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.158648 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.158653 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.158658 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.158663 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.158732 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.158740 | orchestrator | 2026-03-03 01:03:27.158746 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-03 01:03:27.158751 | orchestrator | Tuesday 03 March 2026 00:53:32 +0000 (0:00:01.002) 0:00:09.310 ********* 2026-03-03 01:03:27.158757 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-03 01:03:27.158761 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-03 01:03:27.158764 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-03 01:03:27.158767 | orchestrator | 2026-03-03 01:03:27.158771 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-03 01:03:27.158774 | orchestrator | Tuesday 03 March 2026 00:53:32 +0000 (0:00:00.577) 0:00:09.887 ********* 2026-03-03 01:03:27.158777 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.158780 | orchestrator | ok: 
[testbed-node-4] 2026-03-03 01:03:27.158783 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.158794 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.158797 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.158801 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.158804 | orchestrator | 2026-03-03 01:03:27.158807 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-03 01:03:27.158810 | orchestrator | Tuesday 03 March 2026 00:53:34 +0000 (0:00:01.666) 0:00:11.553 ********* 2026-03-03 01:03:27.158813 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-03 01:03:27.158816 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-03 01:03:27.158819 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-03 01:03:27.158822 | orchestrator | 2026-03-03 01:03:27.158825 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-03 01:03:27.158828 | orchestrator | Tuesday 03 March 2026 00:53:37 +0000 (0:00:02.527) 0:00:14.081 ********* 2026-03-03 01:03:27.158832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-03 01:03:27.158835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-03 01:03:27.158838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-03 01:03:27.158841 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.158844 | orchestrator | 2026-03-03 01:03:27.158847 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-03 01:03:27.158851 | orchestrator | Tuesday 03 March 2026 00:53:37 +0000 (0:00:00.861) 0:00:14.942 ********* 2026-03-03 01:03:27.158855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.158860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.158866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.158869 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.158873 | orchestrator | 2026-03-03 01:03:27.158876 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-03 01:03:27.158879 | orchestrator | Tuesday 03 March 2026 00:53:38 +0000 (0:00:00.848) 0:00:15.791 ********* 2026-03-03 01:03:27.158883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.158903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.158907 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.158910 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.158913 | orchestrator | 2026-03-03 01:03:27.158916 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-03 01:03:27.158920 | orchestrator | Tuesday 03 March 2026 00:53:39 +0000 (0:00:00.516) 0:00:16.308 ********* 2026-03-03 01:03:27.158928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-03 00:53:35.247108', 'end': '2026-03-03 00:53:35.334871', 'delta': '0:00:00.087763', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.158934 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-03 00:53:36.044699', 'end': '2026-03-03 00:53:36.145782', 'delta': '0:00:00.101083', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': 
False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.158937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-03 00:53:36.798933', 'end': '2026-03-03 00:53:36.901576', 'delta': '0:00:00.102643', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.158941 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.158944 | orchestrator | 2026-03-03 01:03:27.158949 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-03 01:03:27.158953 | orchestrator | Tuesday 03 March 2026 00:53:39 +0000 (0:00:00.188) 0:00:16.497 ********* 2026-03-03 01:03:27.158956 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.158964 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.158967 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.158970 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.158973 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.158977 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.158980 | orchestrator | 2026-03-03 01:03:27.158983 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already 
running] ************* 2026-03-03 01:03:27.158986 | orchestrator | Tuesday 03 March 2026 00:53:40 +0000 (0:00:00.776) 0:00:17.273 ********* 2026-03-03 01:03:27.158989 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:03:27.158993 | orchestrator | 2026-03-03 01:03:27.158998 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-03 01:03:27.159004 | orchestrator | Tuesday 03 March 2026 00:53:41 +0000 (0:00:00.850) 0:00:18.123 ********* 2026-03-03 01:03:27.159009 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159304 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.159315 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.159320 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.159323 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.159327 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.159330 | orchestrator | 2026-03-03 01:03:27.159333 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-03 01:03:27.159337 | orchestrator | Tuesday 03 March 2026 00:53:42 +0000 (0:00:01.114) 0:00:19.238 ********* 2026-03-03 01:03:27.159341 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.159346 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159351 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.159356 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.159361 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.159366 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.159371 | orchestrator | 2026-03-03 01:03:27.159376 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-03 01:03:27.159381 | orchestrator | Tuesday 03 March 2026 00:53:43 +0000 (0:00:01.485) 0:00:20.723 ********* 2026-03-03 01:03:27.159419 | orchestrator | 
skipping: [testbed-node-3] 2026-03-03 01:03:27.159424 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.159427 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.159430 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.159434 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.159437 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.159440 | orchestrator | 2026-03-03 01:03:27.159443 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-03 01:03:27.159446 | orchestrator | Tuesday 03 March 2026 00:53:46 +0000 (0:00:02.518) 0:00:23.242 ********* 2026-03-03 01:03:27.159449 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159452 | orchestrator | 2026-03-03 01:03:27.159456 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-03 01:03:27.159459 | orchestrator | Tuesday 03 March 2026 00:53:46 +0000 (0:00:00.288) 0:00:23.530 ********* 2026-03-03 01:03:27.159462 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159465 | orchestrator | 2026-03-03 01:03:27.159468 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-03 01:03:27.159471 | orchestrator | Tuesday 03 March 2026 00:53:46 +0000 (0:00:00.261) 0:00:23.791 ********* 2026-03-03 01:03:27.159475 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159478 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.159481 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.159495 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.159499 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.159502 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.159505 | orchestrator | 2026-03-03 01:03:27.159509 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-03 
01:03:27.159514 | orchestrator | Tuesday 03 March 2026 00:53:48 +0000 (0:00:01.293) 0:00:25.085 ********* 2026-03-03 01:03:27.159526 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.159531 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159536 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.159541 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.159547 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.159552 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.159557 | orchestrator | 2026-03-03 01:03:27.159562 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-03 01:03:27.159568 | orchestrator | Tuesday 03 March 2026 00:53:49 +0000 (0:00:01.167) 0:00:26.252 ********* 2026-03-03 01:03:27.159573 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159578 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.159583 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.159588 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.159593 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.159599 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.159603 | orchestrator | 2026-03-03 01:03:27.159609 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-03 01:03:27.159614 | orchestrator | Tuesday 03 March 2026 00:53:50 +0000 (0:00:01.009) 0:00:27.262 ********* 2026-03-03 01:03:27.159620 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159625 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.159630 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.159635 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.159640 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.159645 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.159650 | orchestrator | 
2026-03-03 01:03:27.159655 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-03 01:03:27.159660 | orchestrator | Tuesday 03 March 2026 00:53:51 +0000 (0:00:01.089) 0:00:28.352 ********* 2026-03-03 01:03:27.159666 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159671 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.159675 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.159681 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.159686 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.159691 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.159696 | orchestrator | 2026-03-03 01:03:27.159705 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-03 01:03:27.159710 | orchestrator | Tuesday 03 March 2026 00:53:51 +0000 (0:00:00.582) 0:00:28.934 ********* 2026-03-03 01:03:27.159716 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.159721 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.159726 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.159732 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.159737 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.159742 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.159982 | orchestrator | 2026-03-03 01:03:27.159992 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-03 01:03:27.159997 | orchestrator | Tuesday 03 March 2026 00:53:52 +0000 (0:00:00.890) 0:00:29.825 ********* 2026-03-03 01:03:27.160003 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.160008 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.160013 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.160018 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.160023 | orchestrator | 
skipping: [testbed-node-1] 2026-03-03 01:03:27.160028 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.160033 | orchestrator | 2026-03-03 01:03:27.160039 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-03 01:03:27.160044 | orchestrator | Tuesday 03 March 2026 00:53:53 +0000 (0:00:00.549) 0:00:30.374 ********* 2026-03-03 01:03:27.160051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--896495c2--660d--5a75--b418--75215a0ec973-osd--block--896495c2--660d--5a75--b418--75215a0ec973', 'dm-uuid-LVM-GpJP3SwEqN8IRMzzg27rllwSIVirHlhSfyzZPmY6R0Kn9YDJtp0fc4Q7CuoV0X63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d486d743--7c4f--58d7--8950--e96875d5f319-osd--block--d486d743--7c4f--58d7--8950--e96875d5f319', 'dm-uuid-LVM-9EM2pLoCc81f2X7Vie2gvZeoKVsOO03V8d2PXcJGe3Ps8WTrewvxmi6DdodPaJYy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160288 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd-osd--block--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd', 'dm-uuid-LVM-FzkgkoVfb2RnZHeeixvaBLUlzwoz3GmBKkIXQJrdo7uwaev79qNVS5X3yHIcAGus'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60a17889--adeb--5df5--a11b--dee290996ccf-osd--block--60a17889--adeb--5df5--a11b--dee290996ccf', 'dm-uuid-LVM-2KOMfDqnadxchcrcgKh2pqnIyHgmTXEvWp5NIBI4IIH0Z87KkSOTClHBnaFxSsBv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160361 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--896495c2--660d--5a75--b418--75215a0ec973-osd--block--896495c2--660d--5a75--b418--75215a0ec973'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hJi2Py-81jO-thE3-PeUa-ee3o-6IJn-t2lTlM', 'scsi-0QEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702', 'scsi-SQEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d486d743--7c4f--58d7--8950--e96875d5f319-osd--block--d486d743--7c4f--58d7--8950--e96875d5f319'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cZIQTi-GOs5-CdQ0-0JfI-XO4A-PPhc-92rEKT', 'scsi-0QEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473', 'scsi-SQEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8', 'scsi-SQEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-03 01:03:27.160708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part1', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part14', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part15', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part16', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7865f1e--8b85--57a7--a15d--91986b577cab-osd--block--f7865f1e--8b85--57a7--a15d--91986b577cab', 'dm-uuid-LVM-rzWUWsHInSLRWdrp72kGd49H55Q2diyIak9DoOb0xRhEavC39dzPF5cbOf6a2zzB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd-osd--block--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Eimw9W-MYPI-UA59-afLt-X9H7-b3VL-NmfrZ3', 'scsi-0QEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc', 'scsi-SQEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b901fd44--5489--5e25--a5fe--b820905f87a1-osd--block--b901fd44--5489--5e25--a5fe--b820905f87a1', 'dm-uuid-LVM-ETIN2cURdX3qKY8G784R8MS3Xrl7JPk1NOvKGIXLGbfYZvO5OlWQEi5VkrkwES6J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--60a17889--adeb--5df5--a11b--dee290996ccf-osd--block--60a17889--adeb--5df5--a11b--dee290996ccf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nHF385-RKkI-sMjx-wEKT-CEHl-TQSH-tknXGo', 'scsi-0QEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d', 'scsi-SQEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4', 'scsi-SQEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160805 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.160808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': 
'506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.160839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160972 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.160978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.160997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part16', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34', 'scsi-SQEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34-part1', 'scsi-SQEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34-part14', 'scsi-SQEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34-part15', 'scsi-SQEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34-part16', 'scsi-SQEMU_QEMU_HARDDISK_b536afd9-feca-47a6-88b0-45d4e217eb34-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161081 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f7865f1e--8b85--57a7--a15d--91986b577cab-osd--block--f7865f1e--8b85--57a7--a15d--91986b577cab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TSkgdd-4u32-vhZa-Igw8-mdVc-zUOc-5kXbMO', 'scsi-0QEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361', 'scsi-SQEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161123 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.161126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b901fd44--5489--5e25--a5fe--b820905f87a1-osd--block--b901fd44--5489--5e25--a5fe--b820905f87a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g33K7x-lc71-nV2n-50c4-euam-kr43-sc7tcb', 'scsi-0QEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301', 'scsi-SQEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53', 'scsi-SQEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-03 01:03:27.161174 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.161179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247', 'scsi-SQEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ab8a212-6a66-4584-abca-b2e7ece64247-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161192 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.161196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:03:27.161242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64', 'scsi-SQEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part1', 'scsi-SQEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part14', 'scsi-SQEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part15', 'scsi-SQEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part16', 'scsi-SQEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:03:27.161271 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.161274 | orchestrator | 2026-03-03 01:03:27.161279 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-03 01:03:27.161288 | orchestrator | Tuesday 03 March 2026 00:53:54 +0000 (0:00:01.284) 0:00:31.659 ********* 2026-03-03 01:03:27.161302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--896495c2--660d--5a75--b418--75215a0ec973-osd--block--896495c2--660d--5a75--b418--75215a0ec973', 'dm-uuid-LVM-GpJP3SwEqN8IRMzzg27rllwSIVirHlhSfyzZPmY6R0Kn9YDJtp0fc4Q7CuoV0X63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 
1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161311 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d486d743--7c4f--58d7--8950--e96875d5f319-osd--block--d486d743--7c4f--58d7--8950--e96875d5f319', 'dm-uuid-LVM-9EM2pLoCc81f2X7Vie2gvZeoKVsOO03V8d2PXcJGe3Ps8WTrewvxmi6DdodPaJYy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161322 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161380 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161386 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161394 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-03-03 01:03:27.161405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd-osd--block--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd', 'dm-uuid-LVM-FzkgkoVfb2RnZHeeixvaBLUlzwoz3GmBKkIXQJrdo7uwaev79qNVS5X3yHIcAGus'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-03 01:03:27.161458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60a17889--adeb--5df5--a11b--dee290996ccf-osd--block--60a17889--adeb--5df5--a11b--dee290996ccf', 'dm-uuid-LVM-2KOMfDqnadxchcrcgKh2pqnIyHgmTXEvWp5NIBI4IIH0Z87KkSOTClHBnaFxSsBv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161474 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--896495c2--660d--5a75--b418--75215a0ec973-osd--block--896495c2--660d--5a75--b418--75215a0ec973'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hJi2Py-81jO-thE3-PeUa-ee3o-6IJn-t2lTlM', 'scsi-0QEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702', 'scsi-SQEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d486d743--7c4f--58d7--8950--e96875d5f319-osd--block--d486d743--7c4f--58d7--8950--e96875d5f319'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cZIQTi-GOs5-CdQ0-0JfI-XO4A-PPhc-92rEKT', 'scsi-0QEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473', 'scsi-SQEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8', 'scsi-SQEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161533 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161552 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161557 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7865f1e--8b85--57a7--a15d--91986b577cab-osd--block--f7865f1e--8b85--57a7--a15d--91986b577cab', 'dm-uuid-LVM-rzWUWsHInSLRWdrp72kGd49H55Q2diyIak9DoOb0xRhEavC39dzPF5cbOf6a2zzB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161600 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161608 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b901fd44--5489--5e25--a5fe--b820905f87a1-osd--block--b901fd44--5489--5e25--a5fe--b820905f87a1', 'dm-uuid-LVM-ETIN2cURdX3qKY8G784R8MS3Xrl7JPk1NOvKGIXLGbfYZvO5OlWQEi5VkrkwES6J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161616 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161622 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.161627 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161633 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161681 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161689 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161695 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.161706 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
2026-03-03 01:03:27.161712 | orchestrator | skipping: [testbed-node-4] => (items: loop7, sda 80.00 GB QEMU HARDDISK (sda1 cloudimg-rootfs, sda14, sda15 UEFI, sda16 BOOT), sdb 20.00 GB ceph OSD LVM, sdc 20.00 GB ceph OSD LVM, sdd 20.00 GB, sr0 QEMU DVD-ROM config-2; skip_reason: Conditional result was False; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-03-03 01:03:27.161717 | orchestrator | skipping: [testbed-node-0] => (items: loop0..loop7, sda 80.00 GB QEMU HARDDISK (sda1 cloudimg-rootfs, sda14, sda15 UEFI, sda16 BOOT), sr0 QEMU DVD-ROM config-2; skip_reason: Conditional result was False; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-03-03 01:03:27.161723 | orchestrator | skipping: [testbed-node-5] => (items: loop4..loop7, sda 80.00 GB QEMU HARDDISK (sda1 cloudimg-rootfs, sda14, sda15 UEFI, sda16 BOOT), sdb 20.00 GB ceph OSD LVM, sdc 20.00 GB ceph OSD LVM, sdd 20.00 GB, sr0 QEMU DVD-ROM config-2; skip_reason: Conditional result was False; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-03-03 01:03:27.162003 | orchestrator | skipping: [testbed-node-1] => (items: loop0..loop7, sda 80.00 GB QEMU HARDDISK (sda1 cloudimg-rootfs, sda14, sda15 UEFI, sda16 BOOT), sr0 QEMU DVD-ROM config-2; skip_reason: Conditional result was False; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-03-03 01:03:27.162186 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.162227 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.162235 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.162240 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.162245 | orchestrator | skipping: [testbed-node-2] => (items: loop0..loop7, sda 80.00 GB QEMU HARDDISK (sda1 cloudimg-rootfs, sda14, sda15 UEFI, sda16 BOOT); skip_reason: Conditional result was False; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part16', 'scsi-SQEMU_QEMU_HARDDISK_db05f358-997f-4d59-a241-e67298a52f64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.162409 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:03:27.162415 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.162420 | orchestrator | 2026-03-03 01:03:27.162466 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-03 01:03:27.162474 | orchestrator | Tuesday 03 March 2026 00:53:56 +0000 (0:00:01.665) 0:00:33.325 ********* 2026-03-03 01:03:27.162479 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.162486 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.162491 | orchestrator | ok: [testbed-node-5] 2026-03-03 
01:03:27.162496 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.162501 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.162506 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.162512 | orchestrator | 2026-03-03 01:03:27.162518 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-03 01:03:27.162522 | orchestrator | Tuesday 03 March 2026 00:53:57 +0000 (0:00:01.381) 0:00:34.706 ********* 2026-03-03 01:03:27.162528 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.162533 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.162538 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.162543 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.162548 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.162553 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.162558 | orchestrator | 2026-03-03 01:03:27.162563 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-03 01:03:27.162566 | orchestrator | Tuesday 03 March 2026 00:53:58 +0000 (0:00:00.724) 0:00:35.430 ********* 2026-03-03 01:03:27.162578 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.162581 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.162584 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.162588 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.162591 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.162594 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.162597 | orchestrator | 2026-03-03 01:03:27.162600 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-03 01:03:27.162604 | orchestrator | Tuesday 03 March 2026 00:53:59 +0000 (0:00:01.042) 0:00:36.473 ********* 2026-03-03 01:03:27.162612 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.162615 | orchestrator | skipping: [testbed-node-4] 
2026-03-03 01:03:27.162618 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.162621 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.162625 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.162628 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.162631 | orchestrator | 2026-03-03 01:03:27.162634 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-03 01:03:27.162637 | orchestrator | Tuesday 03 March 2026 00:54:00 +0000 (0:00:00.881) 0:00:37.354 ********* 2026-03-03 01:03:27.162640 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.162646 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.162649 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.162652 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.162655 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.162659 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.162662 | orchestrator | 2026-03-03 01:03:27.162665 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-03 01:03:27.162668 | orchestrator | Tuesday 03 March 2026 00:54:01 +0000 (0:00:01.522) 0:00:38.876 ********* 2026-03-03 01:03:27.162671 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.162674 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.162678 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.162681 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.162684 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.162687 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.162690 | orchestrator | 2026-03-03 01:03:27.162693 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-03 01:03:27.162696 | orchestrator | Tuesday 03 March 2026 00:54:03 +0000 (0:00:01.548) 0:00:40.429 ********* 
2026-03-03 01:03:27.162700 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-03 01:03:27.162703 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-03 01:03:27.162706 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-03 01:03:27.162709 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-03 01:03:27.162712 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-03 01:03:27.162715 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-03 01:03:27.162718 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-03 01:03:27.162721 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-03 01:03:27.162725 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-03 01:03:27.162728 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-03 01:03:27.162731 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-03 01:03:27.162734 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-03 01:03:27.162737 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-03 01:03:27.162740 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-03 01:03:27.162743 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-03 01:03:27.162746 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-03 01:03:27.162750 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-03 01:03:27.162755 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-03 01:03:27.162760 | orchestrator | 2026-03-03 01:03:27.162766 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-03 01:03:27.162771 | orchestrator | Tuesday 03 March 2026 00:54:08 +0000 (0:00:04.696) 0:00:45.126 ********* 2026-03-03 01:03:27.162776 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-03-03 01:03:27.162781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-03 01:03:27.162786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-03 01:03:27.162796 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.162801 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-03 01:03:27.162806 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-03 01:03:27.162812 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-03 01:03:27.162818 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-03 01:03:27.162844 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-03 01:03:27.162850 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-03 01:03:27.162856 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.162861 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.162866 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-03 01:03:27.162871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-03 01:03:27.162876 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-03 01:03:27.162881 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.162886 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-03 01:03:27.162891 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-03 01:03:27.162896 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-03 01:03:27.162901 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.162906 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-03 01:03:27.162911 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-03 01:03:27.162916 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2026-03-03 01:03:27.162922 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.162927 | orchestrator | 2026-03-03 01:03:27.162932 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-03 01:03:27.162938 | orchestrator | Tuesday 03 March 2026 00:54:08 +0000 (0:00:00.757) 0:00:45.884 ********* 2026-03-03 01:03:27.162943 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.162948 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.162954 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.162960 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.162965 | orchestrator | 2026-03-03 01:03:27.162972 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-03 01:03:27.162979 | orchestrator | Tuesday 03 March 2026 00:54:10 +0000 (0:00:01.243) 0:00:47.127 ********* 2026-03-03 01:03:27.162983 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.162989 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.162994 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.162999 | orchestrator | 2026-03-03 01:03:27.163008 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-03 01:03:27.163014 | orchestrator | Tuesday 03 March 2026 00:54:10 +0000 (0:00:00.385) 0:00:47.512 ********* 2026-03-03 01:03:27.163019 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163024 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.163029 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.163035 | orchestrator | 2026-03-03 01:03:27.163040 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-03-03 01:03:27.163045 | orchestrator | Tuesday 03 March 2026 00:54:10 +0000 (0:00:00.353) 0:00:47.865 ********* 2026-03-03 01:03:27.163051 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163056 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.163062 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.163079 | orchestrator | 2026-03-03 01:03:27.163085 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-03 01:03:27.163091 | orchestrator | Tuesday 03 March 2026 00:54:12 +0000 (0:00:01.421) 0:00:49.287 ********* 2026-03-03 01:03:27.163096 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163107 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163112 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163117 | orchestrator | 2026-03-03 01:03:27.163123 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-03 01:03:27.163128 | orchestrator | Tuesday 03 March 2026 00:54:12 +0000 (0:00:00.553) 0:00:49.840 ********* 2026-03-03 01:03:27.163133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:03:27.163136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:03:27.163139 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:03:27.163142 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163145 | orchestrator | 2026-03-03 01:03:27.163148 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-03 01:03:27.163152 | orchestrator | Tuesday 03 March 2026 00:54:13 +0000 (0:00:00.438) 0:00:50.279 ********* 2026-03-03 01:03:27.163155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:03:27.163158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:03:27.163161 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:03:27.163164 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163167 | orchestrator | 2026-03-03 01:03:27.163170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-03 01:03:27.163173 | orchestrator | Tuesday 03 March 2026 00:54:13 +0000 (0:00:00.607) 0:00:50.886 ********* 2026-03-03 01:03:27.163176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:03:27.163180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:03:27.163183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:03:27.163186 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163189 | orchestrator | 2026-03-03 01:03:27.163192 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-03 01:03:27.163195 | orchestrator | Tuesday 03 March 2026 00:54:14 +0000 (0:00:00.630) 0:00:51.517 ********* 2026-03-03 01:03:27.163198 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163201 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163204 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163207 | orchestrator | 2026-03-03 01:03:27.163211 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-03 01:03:27.163214 | orchestrator | Tuesday 03 March 2026 00:54:14 +0000 (0:00:00.370) 0:00:51.888 ********* 2026-03-03 01:03:27.163217 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-03 01:03:27.163220 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-03 01:03:27.163242 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-03 01:03:27.163246 | orchestrator | 2026-03-03 01:03:27.163249 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-03 01:03:27.163252 | orchestrator | Tuesday 03 March 2026 
00:54:16 +0000 (0:00:01.165) 0:00:53.053 ********* 2026-03-03 01:03:27.163255 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-03 01:03:27.163258 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-03 01:03:27.163261 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-03 01:03:27.163265 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-03 01:03:27.163269 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-03 01:03:27.163274 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-03 01:03:27.163281 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-03 01:03:27.163288 | orchestrator | 2026-03-03 01:03:27.163294 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-03 01:03:27.163299 | orchestrator | Tuesday 03 March 2026 00:54:16 +0000 (0:00:00.842) 0:00:53.896 ********* 2026-03-03 01:03:27.163309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-03 01:03:27.163314 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-03 01:03:27.163319 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-03 01:03:27.163324 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-03 01:03:27.163329 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-03 01:03:27.163335 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-03 01:03:27.163340 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-03-03 01:03:27.163345 | orchestrator | 2026-03-03 01:03:27.163350 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-03 01:03:27.163359 | orchestrator | Tuesday 03 March 2026 00:54:18 +0000 (0:00:01.742) 0:00:55.638 ********* 2026-03-03 01:03:27.163364 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.163370 | orchestrator | 2026-03-03 01:03:27.163375 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-03 01:03:27.163380 | orchestrator | Tuesday 03 March 2026 00:54:19 +0000 (0:00:00.982) 0:00:56.620 ********* 2026-03-03 01:03:27.163385 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.163390 | orchestrator | 2026-03-03 01:03:27.163396 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-03 01:03:27.163401 | orchestrator | Tuesday 03 March 2026 00:54:20 +0000 (0:00:00.981) 0:00:57.602 ********* 2026-03-03 01:03:27.163406 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163412 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.163417 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.163422 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.163428 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.163433 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.163439 | orchestrator | 2026-03-03 01:03:27.163445 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-03 01:03:27.163451 | orchestrator | Tuesday 03 March 2026 00:54:21 +0000 (0:00:01.211) 0:00:58.813 ********* 2026-03-03 
01:03:27.163454 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.163458 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.163461 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163464 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.163467 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163470 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163473 | orchestrator | 2026-03-03 01:03:27.163476 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-03 01:03:27.163479 | orchestrator | Tuesday 03 March 2026 00:54:22 +0000 (0:00:00.737) 0:00:59.551 ********* 2026-03-03 01:03:27.163482 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.163486 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163492 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163496 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.163505 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163510 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.163514 | orchestrator | 2026-03-03 01:03:27.163519 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-03 01:03:27.163524 | orchestrator | Tuesday 03 March 2026 00:54:23 +0000 (0:00:00.860) 0:01:00.412 ********* 2026-03-03 01:03:27.163529 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.163534 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.163539 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163549 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.163554 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163558 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163563 | orchestrator | 2026-03-03 01:03:27.163568 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-03 01:03:27.163573 | orchestrator | 
Tuesday 03 March 2026 00:54:24 +0000 (0:00:00.702) 0:01:01.114 ********* 2026-03-03 01:03:27.163578 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163583 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.163588 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.163592 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.163598 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.163626 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.163633 | orchestrator | 2026-03-03 01:03:27.163638 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-03 01:03:27.163643 | orchestrator | Tuesday 03 March 2026 00:54:25 +0000 (0:00:01.132) 0:01:02.247 ********* 2026-03-03 01:03:27.163649 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163654 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.163660 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.163665 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.163670 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.163676 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.163682 | orchestrator | 2026-03-03 01:03:27.163687 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-03 01:03:27.163693 | orchestrator | Tuesday 03 March 2026 00:54:25 +0000 (0:00:00.613) 0:01:02.860 ********* 2026-03-03 01:03:27.163697 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163701 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.163705 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.163708 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.163712 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.163716 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.163720 | orchestrator | 2026-03-03 01:03:27.163723 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-03-03 01:03:27.163727 | orchestrator | Tuesday 03 March 2026 00:54:26 +0000 (0:00:00.687) 0:01:03.547 ********* 2026-03-03 01:03:27.163731 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163735 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163738 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163742 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.163746 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.163750 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.163753 | orchestrator | 2026-03-03 01:03:27.163757 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-03 01:03:27.163761 | orchestrator | Tuesday 03 March 2026 00:54:27 +0000 (0:00:01.258) 0:01:04.806 ********* 2026-03-03 01:03:27.163764 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163768 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163772 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163775 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.163779 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.163783 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.163787 | orchestrator | 2026-03-03 01:03:27.163790 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-03 01:03:27.163797 | orchestrator | Tuesday 03 March 2026 00:54:29 +0000 (0:00:01.462) 0:01:06.268 ********* 2026-03-03 01:03:27.163801 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163805 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.163809 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.163812 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.163816 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.163819 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.163823 | 
orchestrator | 2026-03-03 01:03:27.163827 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-03 01:03:27.163836 | orchestrator | Tuesday 03 March 2026 00:54:29 +0000 (0:00:00.568) 0:01:06.837 ********* 2026-03-03 01:03:27.163840 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.163844 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.163847 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.163851 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.163855 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.163860 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.163867 | orchestrator | 2026-03-03 01:03:27.163874 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-03 01:03:27.163879 | orchestrator | Tuesday 03 March 2026 00:54:30 +0000 (0:00:00.710) 0:01:07.547 ********* 2026-03-03 01:03:27.163884 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163889 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163894 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163899 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.163904 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.163908 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.163913 | orchestrator | 2026-03-03 01:03:27.163918 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-03 01:03:27.163923 | orchestrator | Tuesday 03 March 2026 00:54:31 +0000 (0:00:00.571) 0:01:08.119 ********* 2026-03-03 01:03:27.163928 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163932 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163937 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163942 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.163947 | orchestrator | skipping: [testbed-node-1] 2026-03-03 
01:03:27.163952 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.163957 | orchestrator | 2026-03-03 01:03:27.163963 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-03 01:03:27.163968 | orchestrator | Tuesday 03 March 2026 00:54:31 +0000 (0:00:00.618) 0:01:08.737 ********* 2026-03-03 01:03:27.163973 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.163979 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.163984 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.163989 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.163994 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164000 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164005 | orchestrator | 2026-03-03 01:03:27.164011 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-03 01:03:27.164014 | orchestrator | Tuesday 03 March 2026 00:54:32 +0000 (0:00:00.555) 0:01:09.293 ********* 2026-03-03 01:03:27.164017 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164022 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164027 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164033 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164040 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164045 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164050 | orchestrator | 2026-03-03 01:03:27.164055 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-03 01:03:27.164059 | orchestrator | Tuesday 03 March 2026 00:54:32 +0000 (0:00:00.653) 0:01:09.947 ********* 2026-03-03 01:03:27.164076 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164082 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164086 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164092 | 
orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164119 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164126 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164131 | orchestrator | 2026-03-03 01:03:27.164136 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-03 01:03:27.164141 | orchestrator | Tuesday 03 March 2026 00:54:33 +0000 (0:00:00.979) 0:01:10.927 ********* 2026-03-03 01:03:27.164146 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164157 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164162 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164167 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.164172 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.164178 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.164181 | orchestrator | 2026-03-03 01:03:27.164184 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-03 01:03:27.164187 | orchestrator | Tuesday 03 March 2026 00:54:34 +0000 (0:00:00.807) 0:01:11.734 ********* 2026-03-03 01:03:27.164190 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.164194 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.164197 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.164200 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.164203 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.164206 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.164209 | orchestrator | 2026-03-03 01:03:27.164212 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-03 01:03:27.164216 | orchestrator | Tuesday 03 March 2026 00:54:35 +0000 (0:00:01.016) 0:01:12.750 ********* 2026-03-03 01:03:27.164219 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.164222 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.164225 | 
orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.164228 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.164231 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.164234 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.164237 | orchestrator | 2026-03-03 01:03:27.164241 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-03 01:03:27.164244 | orchestrator | Tuesday 03 March 2026 00:54:37 +0000 (0:00:02.180) 0:01:14.931 ********* 2026-03-03 01:03:27.164247 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.164250 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.164253 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.164256 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.164259 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.164262 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.164265 | orchestrator | 2026-03-03 01:03:27.164268 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-03 01:03:27.164280 | orchestrator | Tuesday 03 March 2026 00:54:39 +0000 (0:00:01.855) 0:01:16.786 ********* 2026-03-03 01:03:27.164284 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.164287 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.164290 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.164293 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.164296 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.164299 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.164302 | orchestrator | 2026-03-03 01:03:27.164307 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-03 01:03:27.164314 | orchestrator | Tuesday 03 March 2026 00:54:42 +0000 (0:00:02.293) 0:01:19.080 ********* 2026-03-03 01:03:27.164321 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.164328 | orchestrator | 2026-03-03 01:03:27.164332 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-03 01:03:27.164337 | orchestrator | Tuesday 03 March 2026 00:54:43 +0000 (0:00:01.134) 0:01:20.214 ********* 2026-03-03 01:03:27.164342 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164347 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164352 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164357 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164365 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164370 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164375 | orchestrator | 2026-03-03 01:03:27.164380 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-03 01:03:27.164385 | orchestrator | Tuesday 03 March 2026 00:54:43 +0000 (0:00:00.561) 0:01:20.776 ********* 2026-03-03 01:03:27.164395 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164400 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164405 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164409 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164412 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164415 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164418 | orchestrator | 2026-03-03 01:03:27.164421 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-03 01:03:27.164424 | orchestrator | Tuesday 03 March 2026 00:54:44 +0000 (0:00:01.123) 0:01:21.900 ********* 2026-03-03 01:03:27.164428 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-03 
01:03:27.164431 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-03 01:03:27.164434 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-03 01:03:27.164437 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-03 01:03:27.164440 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-03 01:03:27.164443 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-03 01:03:27.164446 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-03 01:03:27.164451 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-03 01:03:27.164456 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-03 01:03:27.164462 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-03 01:03:27.164489 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-03 01:03:27.164495 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-03 01:03:27.164499 | orchestrator | 2026-03-03 01:03:27.164504 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-03 01:03:27.164509 | orchestrator | Tuesday 03 March 2026 00:54:46 +0000 (0:00:01.471) 0:01:23.372 ********* 2026-03-03 01:03:27.164513 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.164518 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.164523 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.164528 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.164533 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.164538 | 
orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.164544 | orchestrator | 2026-03-03 01:03:27.164548 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-03 01:03:27.164551 | orchestrator | Tuesday 03 March 2026 00:54:47 +0000 (0:00:01.411) 0:01:24.783 ********* 2026-03-03 01:03:27.164554 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164557 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164560 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164563 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164566 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164569 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164572 | orchestrator | 2026-03-03 01:03:27.164576 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-03 01:03:27.164579 | orchestrator | Tuesday 03 March 2026 00:54:48 +0000 (0:00:00.604) 0:01:25.388 ********* 2026-03-03 01:03:27.164582 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164585 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164588 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164591 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164594 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164597 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164604 | orchestrator | 2026-03-03 01:03:27.164607 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-03 01:03:27.164610 | orchestrator | Tuesday 03 March 2026 00:54:49 +0000 (0:00:00.745) 0:01:26.134 ********* 2026-03-03 01:03:27.164613 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164617 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164620 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164626 | 
orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164629 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164632 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164635 | orchestrator | 2026-03-03 01:03:27.164638 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-03 01:03:27.164641 | orchestrator | Tuesday 03 March 2026 00:54:49 +0000 (0:00:00.565) 0:01:26.699 ********* 2026-03-03 01:03:27.164645 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.164648 | orchestrator | 2026-03-03 01:03:27.164651 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-03 01:03:27.164654 | orchestrator | Tuesday 03 March 2026 00:54:51 +0000 (0:00:01.368) 0:01:28.068 ********* 2026-03-03 01:03:27.164657 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.164661 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.164664 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.164667 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.164670 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.164673 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.164676 | orchestrator | 2026-03-03 01:03:27.164679 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-03 01:03:27.164682 | orchestrator | Tuesday 03 March 2026 00:55:28 +0000 (0:00:37.013) 0:02:05.081 ********* 2026-03-03 01:03:27.164686 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-03 01:03:27.164689 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-03 01:03:27.164692 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-03 01:03:27.164695 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164698 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-03 01:03:27.164701 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-03 01:03:27.164704 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-03 01:03:27.164708 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164711 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-03 01:03:27.164714 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-03 01:03:27.164717 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-03 01:03:27.164720 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164723 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-03 01:03:27.164726 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-03 01:03:27.164729 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-03 01:03:27.164733 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164736 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-03 01:03:27.164739 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-03 01:03:27.164742 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-03 01:03:27.164745 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164760 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-03 01:03:27.164767 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-03 01:03:27.164770 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-03 01:03:27.164773 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164777 | orchestrator | 2026-03-03 01:03:27.164780 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-03 01:03:27.164783 | orchestrator | Tuesday 03 March 2026 00:55:28 +0000 (0:00:00.640) 0:02:05.721 ********* 2026-03-03 01:03:27.164786 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164789 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164792 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164795 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164798 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164801 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164805 | orchestrator | 2026-03-03 01:03:27.164808 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-03 01:03:27.164811 | orchestrator | Tuesday 03 March 2026 00:55:29 +0000 (0:00:00.754) 0:02:06.476 ********* 2026-03-03 01:03:27.164814 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164817 | orchestrator | 2026-03-03 01:03:27.164820 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-03 01:03:27.164823 | orchestrator | Tuesday 03 March 2026 00:55:29 +0000 (0:00:00.159) 0:02:06.636 ********* 2026-03-03 01:03:27.164826 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164829 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164833 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164836 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164839 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164842 | orchestrator | skipping: 
[testbed-node-2] 2026-03-03 01:03:27.164845 | orchestrator | 2026-03-03 01:03:27.164848 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-03 01:03:27.164851 | orchestrator | Tuesday 03 March 2026 00:55:30 +0000 (0:00:00.584) 0:02:07.220 ********* 2026-03-03 01:03:27.164854 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164858 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164861 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164864 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164867 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164870 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164873 | orchestrator | 2026-03-03 01:03:27.164878 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-03 01:03:27.164881 | orchestrator | Tuesday 03 March 2026 00:55:31 +0000 (0:00:00.793) 0:02:08.014 ********* 2026-03-03 01:03:27.164884 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164887 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164890 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164893 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164897 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.164900 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.164903 | orchestrator | 2026-03-03 01:03:27.164906 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-03 01:03:27.164909 | orchestrator | Tuesday 03 March 2026 00:55:31 +0000 (0:00:00.559) 0:02:08.574 ********* 2026-03-03 01:03:27.164912 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.164915 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.164918 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.164922 | orchestrator | ok: [testbed-node-0] 2026-03-03 
01:03:27.164925 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.164928 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.164931 | orchestrator | 2026-03-03 01:03:27.164934 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-03 01:03:27.164937 | orchestrator | Tuesday 03 March 2026 00:55:33 +0000 (0:00:02.042) 0:02:10.616 ********* 2026-03-03 01:03:27.164943 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.164946 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.164949 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.164952 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.164955 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.164958 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.164961 | orchestrator | 2026-03-03 01:03:27.164964 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-03 01:03:27.164968 | orchestrator | Tuesday 03 March 2026 00:55:34 +0000 (0:00:00.639) 0:02:11.256 ********* 2026-03-03 01:03:27.164971 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.164975 | orchestrator | 2026-03-03 01:03:27.164978 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-03 01:03:27.164981 | orchestrator | Tuesday 03 March 2026 00:55:35 +0000 (0:00:01.208) 0:02:12.465 ********* 2026-03-03 01:03:27.164984 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.164987 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.164991 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.164994 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.164997 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.165000 | orchestrator | skipping: 
[testbed-node-2] 2026-03-03 01:03:27.165003 | orchestrator | 2026-03-03 01:03:27.165006 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-03 01:03:27.165009 | orchestrator | Tuesday 03 March 2026 00:55:36 +0000 (0:00:00.802) 0:02:13.268 ********* 2026-03-03 01:03:27.165012 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.165016 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.165019 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.165022 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.165025 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.165028 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.165031 | orchestrator | 2026-03-03 01:03:27.165034 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-03 01:03:27.165037 | orchestrator | Tuesday 03 March 2026 00:55:36 +0000 (0:00:00.535) 0:02:13.803 ********* 2026-03-03 01:03:27.165041 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.165044 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.165055 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.165059 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.165062 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.165095 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.165100 | orchestrator | 2026-03-03 01:03:27.165106 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-03 01:03:27.165111 | orchestrator | Tuesday 03 March 2026 00:55:37 +0000 (0:00:00.676) 0:02:14.480 ********* 2026-03-03 01:03:27.165117 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.165120 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.165123 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.165126 | orchestrator | skipping: 
[testbed-node-0] 2026-03-03 01:03:27.165129 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.165132 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.165135 | orchestrator | 2026-03-03 01:03:27.165139 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-03 01:03:27.165142 | orchestrator | Tuesday 03 March 2026 00:55:38 +0000 (0:00:00.557) 0:02:15.038 ********* 2026-03-03 01:03:27.165145 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.165148 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.165151 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.165154 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.165157 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.165160 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.165167 | orchestrator | 2026-03-03 01:03:27.165170 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-03 01:03:27.165173 | orchestrator | Tuesday 03 March 2026 00:55:38 +0000 (0:00:00.663) 0:02:15.702 ********* 2026-03-03 01:03:27.165176 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.165179 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.165182 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.165185 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.165188 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.165192 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.165195 | orchestrator | 2026-03-03 01:03:27.165198 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-03 01:03:27.165201 | orchestrator | Tuesday 03 March 2026 00:55:39 +0000 (0:00:00.528) 0:02:16.230 ********* 2026-03-03 01:03:27.165204 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.165207 | orchestrator | skipping: 
[testbed-node-4] 2026-03-03 01:03:27.165210 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.165213 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.165216 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.165221 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.165225 | orchestrator | 2026-03-03 01:03:27.165228 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-03 01:03:27.165231 | orchestrator | Tuesday 03 March 2026 00:55:39 +0000 (0:00:00.599) 0:02:16.829 ********* 2026-03-03 01:03:27.165234 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.165237 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.165240 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.165243 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.165246 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.165249 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.165252 | orchestrator | 2026-03-03 01:03:27.165255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-03 01:03:27.165259 | orchestrator | Tuesday 03 March 2026 00:55:40 +0000 (0:00:00.461) 0:02:17.290 ********* 2026-03-03 01:03:27.165262 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.165265 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.165268 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.165271 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.165274 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.165277 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.165280 | orchestrator | 2026-03-03 01:03:27.165283 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-03 01:03:27.165287 | orchestrator | Tuesday 03 March 2026 00:55:41 +0000 (0:00:00.921) 0:02:18.211 ********* 2026-03-03 
01:03:27.165290 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.165293 | orchestrator | 2026-03-03 01:03:27.165296 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-03 01:03:27.165299 | orchestrator | Tuesday 03 March 2026 00:55:42 +0000 (0:00:00.891) 0:02:19.103 ********* 2026-03-03 01:03:27.165303 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-03 01:03:27.165306 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-03 01:03:27.165309 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-03 01:03:27.165312 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-03 01:03:27.165315 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-03 01:03:27.165319 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-03 01:03:27.165322 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-03 01:03:27.165325 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-03 01:03:27.165328 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-03 01:03:27.165331 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-03 01:03:27.165336 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-03 01:03:27.165340 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-03 01:03:27.165343 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-03 01:03:27.165346 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-03 01:03:27.165349 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-03 01:03:27.165352 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2026-03-03 01:03:27.165355 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-03 01:03:27.165358 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-03 01:03:27.165372 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-03 01:03:27.165376 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-03 01:03:27.165379 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-03 01:03:27.165382 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-03 01:03:27.165385 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-03 01:03:27.165388 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-03 01:03:27.165391 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-03 01:03:27.165395 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-03 01:03:27.165398 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-03 01:03:27.165401 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-03 01:03:27.165404 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-03 01:03:27.165407 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-03 01:03:27.165410 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-03 01:03:27.165413 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-03 01:03:27.165416 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-03 01:03:27.165419 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-03 01:03:27.165423 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-03 01:03:27.165426 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-03 01:03:27.165429 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-03 01:03:27.165432 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-03 01:03:27.165435 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-03 01:03:27.165438 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-03 01:03:27.165442 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-03 01:03:27.165445 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-03 01:03:27.165448 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-03 01:03:27.165453 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-03 01:03:27.165456 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-03 01:03:27.165459 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-03 01:03:27.165463 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-03 01:03:27.165466 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-03 01:03:27.165469 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-03 01:03:27.165472 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-03 01:03:27.165475 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-03 01:03:27.165478 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-03 01:03:27.165481 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-03 01:03:27.165487 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-03 01:03:27.165490 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-03 01:03:27.165493 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-03 01:03:27.165496 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-03 01:03:27.165499 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-03 01:03:27.165502 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-03 01:03:27.165505 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-03 01:03:27.165508 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-03 01:03:27.165512 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-03 01:03:27.165515 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-03 01:03:27.165518 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-03 01:03:27.165521 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-03 01:03:27.165524 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-03 01:03:27.165527 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-03 01:03:27.165530 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-03 01:03:27.165533 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-03 01:03:27.165536 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-03 01:03:27.165540 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-03 01:03:27.165543 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-03 01:03:27.165546 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-03 01:03:27.165549 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 
2026-03-03 01:03:27.165552 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-03 01:03:27.165555 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-03 01:03:27.165567 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-03 01:03:27.165571 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-03 01:03:27.165574 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-03 01:03:27.165577 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-03 01:03:27.165580 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-03 01:03:27.165584 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-03 01:03:27.165587 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-03 01:03:27.165590 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-03 01:03:27.165593 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-03 01:03:27.165596 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-03 01:03:27.165599 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-03 01:03:27.165602 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-03 01:03:27.165605 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-03 01:03:27.165608 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-03 01:03:27.165611 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-03 01:03:27.165614 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-03 01:03:27.165618 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-03 01:03:27.165623 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-03 01:03:27.165626 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-03 01:03:27.165629 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-03 01:03:27.165632 | orchestrator |
2026-03-03 01:03:27.165635 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-03 01:03:27.165638 | orchestrator | Tuesday 03 March 2026 00:55:48 +0000 (0:00:06.836) 0:02:25.940 *********
2026-03-03 01:03:27.165641 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.165644 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.165648 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.165655 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 01:03:27.165658 | orchestrator |
2026-03-03 01:03:27.165661 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-03 01:03:27.165664 | orchestrator | Tuesday 03 March 2026 00:55:49 +0000 (0:00:00.757) 0:02:26.697 *********
2026-03-03 01:03:27.165667 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-03 01:03:27.165671 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-03 01:03:27.165674 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-03 01:03:27.165677 | orchestrator |
2026-03-03 01:03:27.165681 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-03 01:03:27.165684 | orchestrator | Tuesday 03 March 2026 00:55:50 +0000 (0:00:00.763) 0:02:27.461 *********
2026-03-03 01:03:27.165687 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-03 01:03:27.165690 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-03 01:03:27.165693 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-03 01:03:27.165696 | orchestrator |
2026-03-03 01:03:27.165699 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-03 01:03:27.165703 | orchestrator | Tuesday 03 March 2026 00:55:51 +0000 (0:00:01.349) 0:02:28.811 *********
2026-03-03 01:03:27.165706 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:03:27.165709 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:03:27.165712 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:03:27.165715 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.165718 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.165721 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.165724 | orchestrator |
2026-03-03 01:03:27.165727 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-03 01:03:27.165731 | orchestrator | Tuesday 03 March 2026 00:55:52 +0000 (0:00:01.069) 0:02:29.881 *********
2026-03-03 01:03:27.165734 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:03:27.165737 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:03:27.165740 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.165743 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:03:27.165746 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.165749 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.165752 | orchestrator |
2026-03-03 01:03:27.165755 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-03 01:03:27.165759 | orchestrator | Tuesday 03 March 2026 00:55:53 +0000 (0:00:00.969) 0:02:30.851 *********
2026-03-03 01:03:27.165762 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.165765 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.165773 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.165776 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.165779 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.165782 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.165785 | orchestrator |
2026-03-03 01:03:27.165797 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-03 01:03:27.165801 | orchestrator | Tuesday 03 March 2026 00:55:54 +0000 (0:00:00.616) 0:02:31.467 *********
2026-03-03 01:03:27.165804 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.165807 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.165810 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.165813 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.165816 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.165820 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.165823 | orchestrator |
2026-03-03 01:03:27.165826 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-03 01:03:27.165829 | orchestrator | Tuesday 03 March 2026 00:55:55 +0000 (0:00:00.644) 0:02:32.111 *********
2026-03-03 01:03:27.165832 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.165835 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.165838 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.165841 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.165844 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.165847 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.165850 | orchestrator |
2026-03-03 01:03:27.165854 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-03 01:03:27.165857 | orchestrator | Tuesday 03 March 2026 00:55:55 +0000 (0:00:00.586) 0:02:32.698 *********
2026-03-03 01:03:27.165860 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.165863 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.165866 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.165869 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.165872 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.165875 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.165879 | orchestrator |
2026-03-03 01:03:27.165883 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-03 01:03:27.165888 | orchestrator | Tuesday 03 March 2026 00:55:56 +0000 (0:00:00.701) 0:02:33.399 *********
2026-03-03 01:03:27.165894 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.165901 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.165906 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.165911 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.165917 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.165924 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.165929 | orchestrator |
2026-03-03 01:03:27.165938 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-03 01:03:27.165943 | orchestrator | Tuesday 03 March 2026 00:55:56 +0000 (0:00:00.486) 0:02:33.886 *********
2026-03-03 01:03:27.165950 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.165955 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.165961 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.165969 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.165978 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.165986 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.165994 | orchestrator |
2026-03-03 01:03:27.166003 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-03 01:03:27.166052 | orchestrator | Tuesday 03 March 2026 00:55:57 +0000 (0:00:00.918) 0:02:34.805 *********
2026-03-03 01:03:27.166062 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166100 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166108 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166123 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:03:27.166130 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:03:27.166137 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:03:27.166144 | orchestrator |
2026-03-03 01:03:27.166151 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-03 01:03:27.166159 | orchestrator | Tuesday 03 March 2026 00:56:00 +0000 (0:00:02.987) 0:02:37.793 *********
2026-03-03 01:03:27.166166 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:03:27.166173 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:03:27.166179 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:03:27.166186 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166194 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166201 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166208 | orchestrator |
2026-03-03 01:03:27.166215 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-03 01:03:27.166222 | orchestrator | Tuesday 03 March 2026 00:56:01 +0000 (0:00:00.816) 0:02:38.609 *********
2026-03-03 01:03:27.166229 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:03:27.166236 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:03:27.166242 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:03:27.166250 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166257 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166264 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166271 | orchestrator |
2026-03-03 01:03:27.166276 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-03 01:03:27.166281 | orchestrator | Tuesday 03 March 2026 00:56:02 +0000 (0:00:00.715) 0:02:39.325 *********
2026-03-03 01:03:27.166286 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166292 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.166297 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.166302 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166308 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166316 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166322 | orchestrator |
2026-03-03 01:03:27.166328 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-03 01:03:27.166334 | orchestrator | Tuesday 03 March 2026 00:56:03 +0000 (0:00:00.847) 0:02:40.172 *********
2026-03-03 01:03:27.166341 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-03 01:03:27.166348 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-03 01:03:27.166354 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-03 01:03:27.166360 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166397 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166405 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166411 | orchestrator |
2026-03-03 01:03:27.166417 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-03 01:03:27.166424 | orchestrator | Tuesday 03 March 2026 00:56:03 +0000 (0:00:00.643) 0:02:40.816 *********
2026-03-03 01:03:27.166431 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-03 01:03:27.166440 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-03 01:03:27.166451 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166458 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-03 01:03:27.166468 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-03 01:03:27.166477 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-03 01:03:27.166482 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-03 01:03:27.166487 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.166492 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.166497 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166501 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166506 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166511 | orchestrator |
2026-03-03 01:03:27.166516 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-03 01:03:27.166520 | orchestrator | Tuesday 03 March 2026 00:56:04 +0000 (0:00:01.168) 0:02:41.984 *********
2026-03-03 01:03:27.166525 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.166529 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.166533 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166538 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166542 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166547 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166552 | orchestrator |
2026-03-03 01:03:27.166556 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-03 01:03:27.166561 | orchestrator | Tuesday 03 March 2026 00:56:05 +0000 (0:00:00.793) 0:02:42.795 *********
2026-03-03 01:03:27.166565 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166571 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.166575 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.166580 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166585 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166589 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166594 | orchestrator |
2026-03-03 01:03:27.166599 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-03 01:03:27.166605 | orchestrator | Tuesday 03 March 2026 00:56:06 +0000 (0:00:00.600) 0:02:43.589 *********
2026-03-03 01:03:27.166611 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166615 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.166620 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.166626 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166631 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166636 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166641 | orchestrator |
2026-03-03 01:03:27.166646 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-03 01:03:27.166651 | orchestrator | Tuesday 03 March 2026 00:56:07 +0000 (0:00:00.600) 0:02:44.189 *********
2026-03-03 01:03:27.166656 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166661 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.166666 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.166676 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166681 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166687 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166692 | orchestrator |
2026-03-03 01:03:27.166697 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-03 01:03:27.166723 | orchestrator | Tuesday 03 March 2026 00:56:08 +0000 (0:00:00.847) 0:02:45.037 *********
2026-03-03 01:03:27.166727 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166730 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.166734 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.166737 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166740 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166743 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166746 | orchestrator |
2026-03-03 01:03:27.166750 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-03 01:03:27.166755 | orchestrator | Tuesday 03 March 2026 00:56:08 +0000 (0:00:00.887) 0:02:45.925 *********
2026-03-03 01:03:27.166760 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:03:27.166766 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:03:27.166771 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166776 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166782 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166788 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:03:27.166794 | orchestrator |
2026-03-03 01:03:27.166800 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-03 01:03:27.166806 | orchestrator | Tuesday 03 March 2026 00:56:09 +0000 (0:00:00.928) 0:02:46.853 *********
2026-03-03 01:03:27.166811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-03 01:03:27.166818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-03 01:03:27.166823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-03 01:03:27.166829 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166835 | orchestrator |
2026-03-03 01:03:27.166840 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-03 01:03:27.166846 | orchestrator | Tuesday 03 March 2026 00:56:10 +0000 (0:00:00.297) 0:02:47.150 *********
2026-03-03 01:03:27.166852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-03 01:03:27.166857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-03 01:03:27.166864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-03 01:03:27.166869 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166876 | orchestrator |
2026-03-03 01:03:27.166882 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-03 01:03:27.166887 | orchestrator | Tuesday 03 March 2026 00:56:10 +0000 (0:00:00.379) 0:02:47.530 *********
2026-03-03 01:03:27.166896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-03 01:03:27.166902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-03 01:03:27.166906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-03 01:03:27.166911 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.166916 | orchestrator |
2026-03-03 01:03:27.166921 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-03 01:03:27.166927 | orchestrator | Tuesday 03 March 2026 00:56:10 +0000 (0:00:00.325) 0:02:47.856 *********
2026-03-03 01:03:27.166931 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:03:27.166936 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:03:27.166941 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:03:27.166945 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.166950 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.166955 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.166959 | orchestrator |
2026-03-03 01:03:27.166963 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-03 01:03:27.166975 | orchestrator | Tuesday 03 March 2026 00:56:11 +0000 (0:00:00.675) 0:02:48.532 *********
2026-03-03 01:03:27.166980 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-03 01:03:27.166985 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-03 01:03:27.166990 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-03 01:03:27.166995 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-03 01:03:27.167001 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.167006 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-03 01:03:27.167011 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.167015 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-03 01:03:27.167020 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.167025 | orchestrator |
2026-03-03 01:03:27.167030 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-03 01:03:27.167035 | orchestrator | Tuesday 03 March 2026 00:56:13 +0000 (0:00:01.775) 0:02:50.307 *********
2026-03-03 01:03:27.167040 | orchestrator | changed: [testbed-node-3]
2026-03-03 01:03:27.167045 | orchestrator | changed: [testbed-node-4]
2026-03-03 01:03:27.167050 | orchestrator | changed: [testbed-node-5]
2026-03-03 01:03:27.167055 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.167061 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.167079 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.167085 | orchestrator |
2026-03-03 01:03:27.167090 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-03 01:03:27.167095 | orchestrator | Tuesday 03 March 2026 00:56:15 +0000 (0:00:02.608) 0:02:52.916 *********
2026-03-03 01:03:27.167100 | orchestrator | changed: [testbed-node-3]
2026-03-03 01:03:27.167105 | orchestrator | changed: [testbed-node-4]
2026-03-03 01:03:27.167110 | orchestrator | changed: [testbed-node-5]
2026-03-03 01:03:27.167115 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.167120 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.167124 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.167130 | orchestrator |
2026-03-03 01:03:27.167135 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-03 01:03:27.167140 | orchestrator | Tuesday 03 March 2026 00:56:16 +0000 (0:00:01.061) 0:02:53.978 *********
2026-03-03 01:03:27.167144 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167151 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.167154 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.167157 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:03:27.167160 | orchestrator |
2026-03-03 01:03:27.167164 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-03 01:03:27.167189 | orchestrator | Tuesday 03 March 2026 00:56:17 +0000 (0:00:00.957) 0:02:54.936 *********
2026-03-03 01:03:27.167193 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.167198 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.167203 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.167209 | orchestrator |
2026-03-03 01:03:27.167214 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-03 01:03:27.167220 | orchestrator | Tuesday 03 March 2026 00:56:18 +0000 (0:00:00.285) 0:02:55.222 *********
2026-03-03 01:03:27.167225 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.167231 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.167236 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.167242 | orchestrator |
2026-03-03 01:03:27.167248 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-03 01:03:27.167254 | orchestrator | Tuesday 03 March 2026 00:56:19 +0000 (0:00:01.696) 0:02:56.918 *********
2026-03-03 01:03:27.167260 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-03 01:03:27.167266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-03 01:03:27.167271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-03 01:03:27.167277 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.167285 | orchestrator |
2026-03-03 01:03:27.167288 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-03 01:03:27.167292 | orchestrator | Tuesday 03 March 2026 00:56:20 +0000 (0:00:01.024) 0:02:57.943 *********
2026-03-03 01:03:27.167296 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.167301 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.167306 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.167311 | orchestrator |
2026-03-03 01:03:27.167316 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-03 01:03:27.167321 | orchestrator | Tuesday 03 March 2026 00:56:21 +0000 (0:00:00.692) 0:02:58.635 *********
2026-03-03 01:03:27.167326 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.167331 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.167336 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.167342 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 01:03:27.167347 | orchestrator |
2026-03-03 01:03:27.167352 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-03 01:03:27.167357 | orchestrator | Tuesday 03 March 2026 00:56:22 +0000 (0:00:00.866) 0:02:59.502 *********
2026-03-03 01:03:27.167366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-03 01:03:27.167372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-03 01:03:27.167377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-03 01:03:27.167382 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167387 | orchestrator |
2026-03-03 01:03:27.167392 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-03 01:03:27.167398 | orchestrator | Tuesday 03 March 2026 00:56:23 +0000 (0:00:01.012) 0:03:00.514 *********
2026-03-03 01:03:27.167402 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167407 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.167412 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.167417 | orchestrator |
2026-03-03 01:03:27.167422 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-03 01:03:27.167428 | orchestrator | Tuesday 03 March 2026 00:56:23 +0000 (0:00:00.402) 0:03:00.917 *********
2026-03-03 01:03:27.167433 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167438 | orchestrator |
2026-03-03 01:03:27.167443 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-03 01:03:27.167449 | orchestrator | Tuesday 03 March 2026 00:56:24 +0000 (0:00:00.238) 0:03:01.156 *********
2026-03-03 01:03:27.167454 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167459 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.167465 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.167470 | orchestrator |
2026-03-03 01:03:27.167475 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-03 01:03:27.167480 | orchestrator | Tuesday 03 March 2026 00:56:24 +0000 (0:00:00.230) 0:03:01.386 *********
2026-03-03 01:03:27.167486 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167491 | orchestrator |
2026-03-03 01:03:27.167496 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-03 01:03:27.167502 | orchestrator | Tuesday 03 March 2026 00:56:24 +0000 (0:00:00.155) 0:03:01.542 *********
2026-03-03 01:03:27.167507 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167512 | orchestrator |
2026-03-03 01:03:27.167518 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-03 01:03:27.167523 | orchestrator | Tuesday 03 March 2026 00:56:24 +0000 (0:00:00.183) 0:03:01.725 *********
2026-03-03 01:03:27.167528 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167533 | orchestrator |
2026-03-03 01:03:27.167539 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-03 01:03:27.167544 | orchestrator | Tuesday 03 March 2026 00:56:24 +0000 (0:00:00.091) 0:03:01.817 *********
2026-03-03 01:03:27.167554 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167559 | orchestrator |
2026-03-03 01:03:27.167565 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-03 01:03:27.167570 | orchestrator | Tuesday 03 March 2026 00:56:25 +0000 (0:00:00.189) 0:03:02.006 *********
2026-03-03 01:03:27.167576 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167581 | orchestrator |
2026-03-03 01:03:27.167587 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-03 01:03:27.167592 | orchestrator | Tuesday 03 March 2026 00:56:25 +0000 (0:00:00.494) 0:03:02.500 *********
2026-03-03 01:03:27.167598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-03 01:03:27.167603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-03 01:03:27.167609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-03 01:03:27.167614 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167620 | orchestrator |
2026-03-03 01:03:27.167625 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-03 01:03:27.167650 | orchestrator | Tuesday 03 March 2026 00:56:25 +0000 (0:00:00.379) 0:03:02.879 *********
2026-03-03 01:03:27.167657 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167662 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:03:27.167667 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:03:27.167672 | orchestrator |
2026-03-03 01:03:27.167677 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-03 01:03:27.167683 | orchestrator | Tuesday 03 March 2026 00:56:26 +0000 (0:00:00.291) 0:03:03.170 *********
2026-03-03 01:03:27.167688 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167693 | orchestrator |
2026-03-03 01:03:27.167698 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-03 01:03:27.167703 | orchestrator | Tuesday 03 March 2026 00:56:26 +0000 (0:00:00.176) 0:03:03.347 *********
2026-03-03 01:03:27.167708 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167713 | orchestrator |
2026-03-03 01:03:27.167718 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-03 01:03:27.167723 | orchestrator | Tuesday 03 March 2026 00:56:26 +0000 (0:00:00.190) 0:03:03.537 *********
2026-03-03 01:03:27.167729 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.167734 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.167740 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.167745 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 01:03:27.167750 | orchestrator |
2026-03-03 01:03:27.167756 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-03 01:03:27.167761 | orchestrator | Tuesday 03 March 2026 00:56:27 +0000 (0:00:00.855) 0:03:04.392 *********
2026-03-03 01:03:27.167766 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:03:27.167771 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:03:27.167776 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:03:27.167781 | orchestrator |
2026-03-03 01:03:27.167786 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-03 01:03:27.167791 | orchestrator | Tuesday 03 March 2026 00:56:27 +0000 (0:00:00.294) 0:03:04.687 *********
2026-03-03 01:03:27.167796 | orchestrator | changed: [testbed-node-3]
2026-03-03 01:03:27.167801 | orchestrator | changed: [testbed-node-5]
2026-03-03 01:03:27.167806 | orchestrator | changed: [testbed-node-4]
2026-03-03 01:03:27.167812 | orchestrator |
2026-03-03 01:03:27.167817 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-03 01:03:27.167822 | orchestrator | Tuesday 03 March 2026 00:56:28 +0000 (0:00:01.087) 0:03:05.775 *********
2026-03-03 01:03:27.167832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-03 01:03:27.167837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-03 01:03:27.167842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-03 01:03:27.167847 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:03:27.167856 | orchestrator |
2026-03-03 01:03:27.167861 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-03 01:03:27.167867 | orchestrator | Tuesday 03 March 2026 00:56:29 +0000 (0:00:00.702) 0:03:06.477 *********
2026-03-03 01:03:27.167872 | orchestrator | ok:
[testbed-node-3] 2026-03-03 01:03:27.167877 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.167882 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.167887 | orchestrator | 2026-03-03 01:03:27.167893 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-03 01:03:27.167898 | orchestrator | Tuesday 03 March 2026 00:56:29 +0000 (0:00:00.502) 0:03:06.980 ********* 2026-03-03 01:03:27.167903 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.167908 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.167913 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.167918 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.167923 | orchestrator | 2026-03-03 01:03:27.167929 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-03 01:03:27.167934 | orchestrator | Tuesday 03 March 2026 00:56:30 +0000 (0:00:00.795) 0:03:07.776 ********* 2026-03-03 01:03:27.167939 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.167944 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.167950 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.167955 | orchestrator | 2026-03-03 01:03:27.167960 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-03 01:03:27.167965 | orchestrator | Tuesday 03 March 2026 00:56:31 +0000 (0:00:00.450) 0:03:08.226 ********* 2026-03-03 01:03:27.167970 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.167975 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.167980 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.167986 | orchestrator | 2026-03-03 01:03:27.167991 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-03 01:03:27.167996 | orchestrator | Tuesday 
03 March 2026 00:56:32 +0000 (0:00:01.260) 0:03:09.487 ********* 2026-03-03 01:03:27.168002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:03:27.168007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:03:27.168012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:03:27.168017 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.168022 | orchestrator | 2026-03-03 01:03:27.168027 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-03 01:03:27.168032 | orchestrator | Tuesday 03 March 2026 00:56:33 +0000 (0:00:00.735) 0:03:10.223 ********* 2026-03-03 01:03:27.168038 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.168043 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.168048 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.168053 | orchestrator | 2026-03-03 01:03:27.168058 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-03 01:03:27.168091 | orchestrator | Tuesday 03 March 2026 00:56:33 +0000 (0:00:00.397) 0:03:10.620 ********* 2026-03-03 01:03:27.168097 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.168102 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.168106 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.168111 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168116 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168140 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168146 | orchestrator | 2026-03-03 01:03:27.168152 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-03 01:03:27.168157 | orchestrator | Tuesday 03 March 2026 00:56:34 +0000 (0:00:00.823) 0:03:11.443 ********* 2026-03-03 01:03:27.168162 | orchestrator | skipping: [testbed-node-3] 2026-03-03 
01:03:27.168167 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.168172 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.168181 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.168187 | orchestrator | 2026-03-03 01:03:27.168192 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-03 01:03:27.168197 | orchestrator | Tuesday 03 March 2026 00:56:35 +0000 (0:00:00.783) 0:03:12.227 ********* 2026-03-03 01:03:27.168202 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168207 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168212 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.168217 | orchestrator | 2026-03-03 01:03:27.168223 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-03 01:03:27.168228 | orchestrator | Tuesday 03 March 2026 00:56:35 +0000 (0:00:00.478) 0:03:12.705 ********* 2026-03-03 01:03:27.168234 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.168239 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.168244 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.168249 | orchestrator | 2026-03-03 01:03:27.168254 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-03 01:03:27.168259 | orchestrator | Tuesday 03 March 2026 00:56:37 +0000 (0:00:01.332) 0:03:14.038 ********* 2026-03-03 01:03:27.168265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-03 01:03:27.168270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-03 01:03:27.168275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-03 01:03:27.168280 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168285 | orchestrator | 2026-03-03 01:03:27.168291 | orchestrator | 
RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-03 01:03:27.168296 | orchestrator | Tuesday 03 March 2026 00:56:37 +0000 (0:00:00.616) 0:03:14.655 ********* 2026-03-03 01:03:27.168301 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168307 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168312 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.168317 | orchestrator | 2026-03-03 01:03:27.168328 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-03 01:03:27.168334 | orchestrator | 2026-03-03 01:03:27.168339 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-03 01:03:27.168345 | orchestrator | Tuesday 03 March 2026 00:56:38 +0000 (0:00:00.539) 0:03:15.195 ********* 2026-03-03 01:03:27.168350 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.168356 | orchestrator | 2026-03-03 01:03:27.168361 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-03 01:03:27.168366 | orchestrator | Tuesday 03 March 2026 00:56:38 +0000 (0:00:00.626) 0:03:15.821 ********* 2026-03-03 01:03:27.168372 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.168377 | orchestrator | 2026-03-03 01:03:27.168382 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-03 01:03:27.168388 | orchestrator | Tuesday 03 March 2026 00:56:39 +0000 (0:00:00.515) 0:03:16.336 ********* 2026-03-03 01:03:27.168393 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168398 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.168403 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168408 | orchestrator | 
2026-03-03 01:03:27.168413 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-03 01:03:27.168418 | orchestrator | Tuesday 03 March 2026 00:56:40 +0000 (0:00:00.890) 0:03:17.227 ********* 2026-03-03 01:03:27.168424 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168429 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168435 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168440 | orchestrator | 2026-03-03 01:03:27.168446 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-03 01:03:27.168451 | orchestrator | Tuesday 03 March 2026 00:56:40 +0000 (0:00:00.272) 0:03:17.500 ********* 2026-03-03 01:03:27.168460 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168465 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168470 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168476 | orchestrator | 2026-03-03 01:03:27.168481 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-03 01:03:27.168487 | orchestrator | Tuesday 03 March 2026 00:56:40 +0000 (0:00:00.245) 0:03:17.746 ********* 2026-03-03 01:03:27.168492 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168497 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168502 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168508 | orchestrator | 2026-03-03 01:03:27.168513 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-03 01:03:27.168518 | orchestrator | Tuesday 03 March 2026 00:56:41 +0000 (0:00:00.329) 0:03:18.075 ********* 2026-03-03 01:03:27.168523 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168528 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168533 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.168539 | orchestrator | 2026-03-03 
01:03:27.168544 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-03 01:03:27.168549 | orchestrator | Tuesday 03 March 2026 00:56:41 +0000 (0:00:00.872) 0:03:18.947 ********* 2026-03-03 01:03:27.168554 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168559 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168564 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168569 | orchestrator | 2026-03-03 01:03:27.168574 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-03 01:03:27.168579 | orchestrator | Tuesday 03 March 2026 00:56:42 +0000 (0:00:00.460) 0:03:19.408 ********* 2026-03-03 01:03:27.168603 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168609 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168614 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168620 | orchestrator | 2026-03-03 01:03:27.168625 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-03 01:03:27.168630 | orchestrator | Tuesday 03 March 2026 00:56:42 +0000 (0:00:00.262) 0:03:19.670 ********* 2026-03-03 01:03:27.168635 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168640 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168645 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.168651 | orchestrator | 2026-03-03 01:03:27.168656 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-03 01:03:27.168661 | orchestrator | Tuesday 03 March 2026 00:56:43 +0000 (0:00:00.762) 0:03:20.433 ********* 2026-03-03 01:03:27.168666 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168671 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168676 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.168682 | orchestrator | 2026-03-03 01:03:27.168687 | orchestrator | TASK 
[ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-03 01:03:27.168692 | orchestrator | Tuesday 03 March 2026 00:56:44 +0000 (0:00:00.597) 0:03:21.030 ********* 2026-03-03 01:03:27.168697 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168703 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168708 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168713 | orchestrator | 2026-03-03 01:03:27.168719 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-03 01:03:27.168724 | orchestrator | Tuesday 03 March 2026 00:56:44 +0000 (0:00:00.513) 0:03:21.544 ********* 2026-03-03 01:03:27.168729 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168734 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168739 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.168745 | orchestrator | 2026-03-03 01:03:27.168750 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-03 01:03:27.168755 | orchestrator | Tuesday 03 March 2026 00:56:44 +0000 (0:00:00.348) 0:03:21.893 ********* 2026-03-03 01:03:27.168761 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168770 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168775 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168780 | orchestrator | 2026-03-03 01:03:27.168785 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-03 01:03:27.168791 | orchestrator | Tuesday 03 March 2026 00:56:45 +0000 (0:00:00.349) 0:03:22.242 ********* 2026-03-03 01:03:27.168796 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168804 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168810 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168815 | orchestrator | 2026-03-03 01:03:27.168820 | orchestrator | TASK [ceph-handler : 
Set_fact handler_rgw_status] ****************************** 2026-03-03 01:03:27.168825 | orchestrator | Tuesday 03 March 2026 00:56:45 +0000 (0:00:00.276) 0:03:22.518 ********* 2026-03-03 01:03:27.168831 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168836 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168841 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168847 | orchestrator | 2026-03-03 01:03:27.168852 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-03 01:03:27.168857 | orchestrator | Tuesday 03 March 2026 00:56:46 +0000 (0:00:00.588) 0:03:23.107 ********* 2026-03-03 01:03:27.168863 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168868 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168873 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168879 | orchestrator | 2026-03-03 01:03:27.168884 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-03 01:03:27.168889 | orchestrator | Tuesday 03 March 2026 00:56:46 +0000 (0:00:00.349) 0:03:23.456 ********* 2026-03-03 01:03:27.168894 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.168900 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.168905 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.168909 | orchestrator | 2026-03-03 01:03:27.168914 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-03 01:03:27.168919 | orchestrator | Tuesday 03 March 2026 00:56:46 +0000 (0:00:00.270) 0:03:23.727 ********* 2026-03-03 01:03:27.168924 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168929 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168934 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.168938 | orchestrator | 2026-03-03 01:03:27.168943 | orchestrator | TASK [ceph-handler : Set_fact 
handler_crash_status] **************************** 2026-03-03 01:03:27.168948 | orchestrator | Tuesday 03 March 2026 00:56:47 +0000 (0:00:00.359) 0:03:24.086 ********* 2026-03-03 01:03:27.168953 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168959 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.168964 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168970 | orchestrator | 2026-03-03 01:03:27.168975 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-03 01:03:27.168980 | orchestrator | Tuesday 03 March 2026 00:56:47 +0000 (0:00:00.548) 0:03:24.634 ********* 2026-03-03 01:03:27.168986 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.168991 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.168997 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.169003 | orchestrator | 2026-03-03 01:03:27.169009 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-03 01:03:27.169014 | orchestrator | Tuesday 03 March 2026 00:56:48 +0000 (0:00:00.771) 0:03:25.406 ********* 2026-03-03 01:03:27.169020 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.169024 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.169029 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.169034 | orchestrator | 2026-03-03 01:03:27.169038 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-03 01:03:27.169043 | orchestrator | Tuesday 03 March 2026 00:56:48 +0000 (0:00:00.334) 0:03:25.740 ********* 2026-03-03 01:03:27.169048 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.169057 | orchestrator | 2026-03-03 01:03:27.169062 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-03 01:03:27.169078 | orchestrator | Tuesday 03 March 
2026 00:56:49 +0000 (0:00:00.702) 0:03:26.442 ********* 2026-03-03 01:03:27.169083 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.169088 | orchestrator | 2026-03-03 01:03:27.169115 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-03 01:03:27.169122 | orchestrator | Tuesday 03 March 2026 00:56:49 +0000 (0:00:00.134) 0:03:26.576 ********* 2026-03-03 01:03:27.169126 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-03 01:03:27.169131 | orchestrator | 2026-03-03 01:03:27.169135 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-03 01:03:27.169140 | orchestrator | Tuesday 03 March 2026 00:56:50 +0000 (0:00:00.959) 0:03:27.536 ********* 2026-03-03 01:03:27.169145 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.169149 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.169154 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.169159 | orchestrator | 2026-03-03 01:03:27.169164 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-03 01:03:27.169169 | orchestrator | Tuesday 03 March 2026 00:56:50 +0000 (0:00:00.325) 0:03:27.861 ********* 2026-03-03 01:03:27.169173 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.169178 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.169183 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.169188 | orchestrator | 2026-03-03 01:03:27.169192 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-03 01:03:27.169197 | orchestrator | Tuesday 03 March 2026 00:56:51 +0000 (0:00:00.517) 0:03:28.378 ********* 2026-03-03 01:03:27.169202 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.169206 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.169211 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.169215 | orchestrator | 
2026-03-03 01:03:27.169220 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-03 01:03:27.169225 | orchestrator | Tuesday 03 March 2026 00:56:52 +0000 (0:00:01.502) 0:03:29.881 ********* 2026-03-03 01:03:27.169230 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.169234 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.169239 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.169243 | orchestrator | 2026-03-03 01:03:27.169248 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-03 01:03:27.169252 | orchestrator | Tuesday 03 March 2026 00:56:53 +0000 (0:00:00.805) 0:03:30.686 ********* 2026-03-03 01:03:27.169257 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.169263 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.169268 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.169273 | orchestrator | 2026-03-03 01:03:27.169278 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-03 01:03:27.169288 | orchestrator | Tuesday 03 March 2026 00:56:54 +0000 (0:00:00.562) 0:03:31.249 ********* 2026-03-03 01:03:27.169293 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.169299 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.169304 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.169309 | orchestrator | 2026-03-03 01:03:27.169314 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-03 01:03:27.169320 | orchestrator | Tuesday 03 March 2026 00:56:54 +0000 (0:00:00.682) 0:03:31.931 ********* 2026-03-03 01:03:27.169324 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.169329 | orchestrator | 2026-03-03 01:03:27.169334 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-03 01:03:27.169339 | orchestrator | 
Tuesday 03 March 2026 00:56:55 +0000 (0:00:00.917) 0:03:32.849 ********* 2026-03-03 01:03:27.169343 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.169348 | orchestrator | 2026-03-03 01:03:27.169353 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-03 01:03:27.169358 | orchestrator | Tuesday 03 March 2026 00:56:56 +0000 (0:00:01.079) 0:03:33.929 ********* 2026-03-03 01:03:27.169369 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-03 01:03:27.169374 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.169379 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.169385 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-03 01:03:27.169391 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-03 01:03:27.169397 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-03 01:03:27.169401 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-03 01:03:27.169407 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-03-03 01:03:27.169411 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-03 01:03:27.169416 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-03 01:03:27.169422 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-03 01:03:27.169426 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-03 01:03:27.169431 | orchestrator | 2026-03-03 01:03:27.169436 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-03 01:03:27.169441 | orchestrator | Tuesday 03 March 2026 00:57:00 +0000 (0:00:03.828) 0:03:37.758 ********* 2026-03-03 01:03:27.169446 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.169451 | 
orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.169456 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.169461 | orchestrator | 2026-03-03 01:03:27.169467 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-03 01:03:27.169472 | orchestrator | Tuesday 03 March 2026 00:57:01 +0000 (0:00:01.015) 0:03:38.774 ********* 2026-03-03 01:03:27.169477 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.169482 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.169487 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.169492 | orchestrator | 2026-03-03 01:03:27.169497 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-03 01:03:27.169502 | orchestrator | Tuesday 03 March 2026 00:57:02 +0000 (0:00:00.312) 0:03:39.086 ********* 2026-03-03 01:03:27.169507 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.169512 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.169517 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.169522 | orchestrator | 2026-03-03 01:03:27.169527 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-03 01:03:27.169533 | orchestrator | Tuesday 03 March 2026 00:57:02 +0000 (0:00:00.521) 0:03:39.608 ********* 2026-03-03 01:03:27.169564 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.169570 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.169575 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.169581 | orchestrator | 2026-03-03 01:03:27.169586 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-03 01:03:27.169591 | orchestrator | Tuesday 03 March 2026 00:57:04 +0000 (0:00:02.189) 0:03:41.798 ********* 2026-03-03 01:03:27.169596 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.169601 | orchestrator | changed: [testbed-node-1] 
2026-03-03 01:03:27.169606 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.169611 | orchestrator |
2026-03-03 01:03:27.169616 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-03 01:03:27.169621 | orchestrator | Tuesday 03 March 2026 00:57:06 +0000 (0:00:01.426) 0:03:43.224 *********
2026-03-03 01:03:27.169627 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.169632 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.169637 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.169642 | orchestrator |
2026-03-03 01:03:27.169647 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-03 01:03:27.169652 | orchestrator | Tuesday 03 March 2026 00:57:06 +0000 (0:00:00.273) 0:03:43.498 *********
2026-03-03 01:03:27.169663 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:03:27.169668 | orchestrator |
2026-03-03 01:03:27.169673 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-03 01:03:27.169678 | orchestrator | Tuesday 03 March 2026 00:57:07 +0000 (0:00:00.756) 0:03:44.255 *********
2026-03-03 01:03:27.169683 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.169688 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.169693 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.169698 | orchestrator |
2026-03-03 01:03:27.169703 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-03 01:03:27.169709 | orchestrator | Tuesday 03 March 2026 00:57:07 +0000 (0:00:00.310) 0:03:44.565 *********
2026-03-03 01:03:27.169714 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.169719 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.169724 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.169729 | orchestrator |
2026-03-03 01:03:27.169734 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-03 01:03:27.169739 | orchestrator | Tuesday 03 March 2026 00:57:07 +0000 (0:00:00.305) 0:03:44.871 *********
2026-03-03 01:03:27.169747 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:03:27.169752 | orchestrator |
2026-03-03 01:03:27.169758 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-03 01:03:27.169763 | orchestrator | Tuesday 03 March 2026 00:57:08 +0000 (0:00:00.830) 0:03:45.701 *********
2026-03-03 01:03:27.169768 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.169771 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.169774 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.169778 | orchestrator |
2026-03-03 01:03:27.169781 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-03 01:03:27.169784 | orchestrator | Tuesday 03 March 2026 00:57:10 +0000 (0:00:02.148) 0:03:47.849 *********
2026-03-03 01:03:27.169789 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.169794 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.169799 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.169804 | orchestrator |
2026-03-03 01:03:27.169809 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-03 01:03:27.169814 | orchestrator | Tuesday 03 March 2026 00:57:11 +0000 (0:00:01.077) 0:03:48.927 *********
2026-03-03 01:03:27.169819 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.169823 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.169829 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.169834 | orchestrator |
2026-03-03 01:03:27.169839 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-03 01:03:27.169844 | orchestrator | Tuesday 03 March 2026 00:57:13 +0000 (0:00:01.776) 0:03:50.704 *********
2026-03-03 01:03:27.169849 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.169854 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.169859 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.169864 | orchestrator |
2026-03-03 01:03:27.169868 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-03 01:03:27.169873 | orchestrator | Tuesday 03 March 2026 00:57:15 +0000 (0:00:02.132) 0:03:52.836 *********
2026-03-03 01:03:27.169878 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:03:27.169883 | orchestrator |
2026-03-03 01:03:27.169887 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-03 01:03:27.169892 | orchestrator | Tuesday 03 March 2026 00:57:16 +0000 (0:00:00.609) 0:03:53.446 *********
2026-03-03 01:03:27.169897 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.169901 | orchestrator |
2026-03-03 01:03:27.169906 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-03 01:03:27.169916 | orchestrator | Tuesday 03 March 2026 00:57:17 +0000 (0:00:01.095) 0:03:54.541 *********
2026-03-03 01:03:27.169921 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.169925 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.169930 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.169935 | orchestrator |
2026-03-03 01:03:27.169940 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-03 01:03:27.169944 | orchestrator | Tuesday 03 March 2026 00:57:25 +0000 (0:00:08.425) 0:04:02.967 *********
2026-03-03 01:03:27.169949 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.169954 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.169958 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.169963 | orchestrator |
2026-03-03 01:03:27.169968 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-03 01:03:27.169972 | orchestrator | Tuesday 03 March 2026 00:57:26 +0000 (0:00:00.588) 0:04:03.555 *********
2026-03-03 01:03:27.170007 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__032871f02131808b4171578208754bde6b340396'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-03 01:03:27.170043 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__032871f02131808b4171578208754bde6b340396'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-03 01:03:27.170048 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__032871f02131808b4171578208754bde6b340396'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-03 01:03:27.170052 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__032871f02131808b4171578208754bde6b340396'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-03 01:03:27.170059 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__032871f02131808b4171578208754bde6b340396'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-03 01:03:27.170063 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__032871f02131808b4171578208754bde6b340396'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__032871f02131808b4171578208754bde6b340396'}])
2026-03-03 01:03:27.170083 | orchestrator |
2026-03-03 01:03:27.170088 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-03 01:03:27.170094 | orchestrator | Tuesday 03 March 2026 00:57:40 +0000 (0:00:14.442) 0:04:17.997 *********
2026-03-03 01:03:27.170099 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170102 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170106 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170111 | orchestrator |
2026-03-03 01:03:27.170115 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-03 01:03:27.170129 | orchestrator | Tuesday 03 March 2026 00:57:41 +0000 (0:00:00.342) 0:04:18.340 *********
2026-03-03 01:03:27.170136 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-03-03 01:03:27.170141 | orchestrator |
2026-03-03 01:03:27.170146 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-03 01:03:27.170151 | orchestrator | Tuesday 03 March 2026 00:57:42 +0000 (0:00:00.392) 0:04:19.185 *********
2026-03-03 01:03:27.170155 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170160 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170165 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170170 | orchestrator |
2026-03-03 01:03:27.170175 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-03 01:03:27.170180 | orchestrator | Tuesday 03 March 2026 00:57:42 +0000 (0:00:00.318) 0:04:19.578 *********
2026-03-03 01:03:27.170185 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170191 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170196 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170201 | orchestrator |
2026-03-03 01:03:27.170206 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-03 01:03:27.170211 | orchestrator | Tuesday 03 March 2026 00:57:42 +0000 (0:00:00.318) 0:04:19.896 *********
2026-03-03 01:03:27.170216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-03 01:03:27.170222 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-03 01:03:27.170228 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-03 01:03:27.170233 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170238 | orchestrator |
2026-03-03 01:03:27.170243 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-03 01:03:27.170248 | orchestrator | Tuesday 03 March 2026 00:57:43 +0000 (0:00:00.847) 0:04:20.744 *********
2026-03-03 01:03:27.170253 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170259 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170263 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170269 | orchestrator |
2026-03-03 01:03:27.170274 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-03 01:03:27.170280 | orchestrator |
2026-03-03 01:03:27.170309 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-03 01:03:27.170316 | orchestrator | Tuesday 03 March 2026 00:57:44 +0000 (0:00:00.798) 0:04:21.543 *********
2026-03-03 01:03:27.170322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-03-03 01:03:27.170326 | orchestrator |
2026-03-03 01:03:27.170329 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-03 01:03:27.170333 | orchestrator | Tuesday 03 March 2026 00:57:45 +0000 (0:00:00.675) 0:04:22.219 *********
2026-03-03 01:03:27.170336 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-03-03 01:03:27.170339 | orchestrator |
2026-03-03 01:03:27.170342 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-03 01:03:27.170345 | orchestrator | Tuesday 03 March 2026 00:57:46 +0000 (0:00:01.028) 0:04:23.248 *********
2026-03-03 01:03:27.170351 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170355 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170360 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170366 | orchestrator |
2026-03-03 01:03:27.170369 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-03 01:03:27.170372 | orchestrator | Tuesday 03 March 2026 00:57:47 +0000 (0:00:00.934) 0:04:24.182 *********
2026-03-03 01:03:27.170375 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170379 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170382 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170385 | orchestrator |
2026-03-03 01:03:27.170388 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-03 01:03:27.170395 | orchestrator | Tuesday 03 March 2026 00:57:47 +0000 (0:00:00.409) 0:04:24.592 *********
2026-03-03 01:03:27.170398 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170401 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170404 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170407 | orchestrator |
2026-03-03 01:03:27.170410 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-03 01:03:27.170413 | orchestrator | Tuesday 03 March 2026 00:57:48 +0000 (0:00:00.439) 0:04:25.031 *********
2026-03-03 01:03:27.170417 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170420 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170426 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170429 | orchestrator |
2026-03-03 01:03:27.170432 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-03 01:03:27.170435 | orchestrator | Tuesday 03 March 2026 00:57:48 +0000 (0:00:00.327) 0:04:25.358 *********
2026-03-03 01:03:27.170438 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170441 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170444 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170448 | orchestrator |
2026-03-03 01:03:27.170451 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-03 01:03:27.170454 | orchestrator | Tuesday 03 March 2026 00:57:49 +0000 (0:00:00.809) 0:04:26.168 *********
2026-03-03 01:03:27.170459 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170464 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170470 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170475 | orchestrator |
2026-03-03 01:03:27.170481 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-03 01:03:27.170489 | orchestrator | Tuesday 03 March 2026 00:57:49 +0000 (0:00:00.261) 0:04:26.429 *********
2026-03-03 01:03:27.170493 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170498 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170503 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170508 | orchestrator |
2026-03-03 01:03:27.170513 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-03 01:03:27.170518 | orchestrator | Tuesday 03 March 2026 00:57:49 +0000 (0:00:00.270) 0:04:26.700 *********
2026-03-03 01:03:27.170524 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170529 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170534 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170539 | orchestrator |
2026-03-03 01:03:27.170544 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-03 01:03:27.170552 | orchestrator | Tuesday 03 March 2026 00:57:50 +0000 (0:00:01.054) 0:04:27.755 *********
2026-03-03 01:03:27.170558 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170563 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170568 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170573 | orchestrator |
2026-03-03 01:03:27.170578 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-03 01:03:27.170583 | orchestrator | Tuesday 03 March 2026 00:57:51 +0000 (0:00:00.797) 0:04:28.553 *********
2026-03-03 01:03:27.170588 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170593 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170598 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170601 | orchestrator |
2026-03-03 01:03:27.170604 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-03 01:03:27.170607 | orchestrator | Tuesday 03 March 2026 00:57:51 +0000 (0:00:00.394) 0:04:28.947 *********
2026-03-03 01:03:27.170610 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170613 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170616 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170619 | orchestrator |
2026-03-03 01:03:27.170623 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-03 01:03:27.170628 | orchestrator | Tuesday 03 March 2026 00:57:52 +0000 (0:00:00.406) 0:04:29.354 *********
2026-03-03 01:03:27.170637 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170643 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170648 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170653 | orchestrator |
2026-03-03 01:03:27.170658 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-03 01:03:27.170663 | orchestrator | Tuesday 03 March 2026 00:57:52 +0000 (0:00:00.475) 0:04:29.830 *********
2026-03-03 01:03:27.170668 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170674 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170697 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170701 | orchestrator |
2026-03-03 01:03:27.170704 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-03 01:03:27.170708 | orchestrator | Tuesday 03 March 2026 00:57:53 +0000 (0:00:00.296) 0:04:30.127 *********
2026-03-03 01:03:27.170711 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170714 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170717 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170720 | orchestrator |
2026-03-03 01:03:27.170724 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-03 01:03:27.170730 | orchestrator | Tuesday 03 March 2026 00:57:53 +0000 (0:00:00.254) 0:04:30.381 *********
2026-03-03 01:03:27.170735 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170740 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170745 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170751 | orchestrator |
2026-03-03 01:03:27.170757 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-03 01:03:27.170762 | orchestrator | Tuesday 03 March 2026 00:57:53 +0000 (0:00:00.358) 0:04:30.740 *********
2026-03-03 01:03:27.170767 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170773 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170777 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170782 | orchestrator |
2026-03-03 01:03:27.170787 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-03 01:03:27.170792 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:00.443) 0:04:31.183 *********
2026-03-03 01:03:27.170797 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170803 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170808 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170813 | orchestrator |
2026-03-03 01:03:27.170816 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-03 01:03:27.170819 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:00.345) 0:04:31.529 *********
2026-03-03 01:03:27.170822 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170825 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170829 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170832 | orchestrator |
2026-03-03 01:03:27.170835 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-03 01:03:27.170838 | orchestrator | Tuesday 03 March 2026 00:57:54 +0000 (0:00:00.335) 0:04:31.864 *********
2026-03-03 01:03:27.170841 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.170844 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.170847 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.170850 | orchestrator |
2026-03-03 01:03:27.170861 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-03 01:03:27.170866 | orchestrator | Tuesday 03 March 2026 00:57:55 +0000 (0:00:00.646) 0:04:32.511 *********
2026-03-03 01:03:27.170872 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-03 01:03:27.170877 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-03 01:03:27.170885 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-03 01:03:27.170892 | orchestrator |
2026-03-03 01:03:27.170897 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-03 01:03:27.170908 | orchestrator | Tuesday 03 March 2026 00:57:56 +0000 (0:00:00.597) 0:04:33.108 *********
2026-03-03 01:03:27.170913 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:03:27.170918 | orchestrator |
2026-03-03 01:03:27.170922 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-03 01:03:27.170925 | orchestrator | Tuesday 03 March 2026 00:57:56 +0000 (0:00:00.473) 0:04:33.581 *********
2026-03-03 01:03:27.170928 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.170931 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.170934 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.170937 | orchestrator |
2026-03-03 01:03:27.170940 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-03 01:03:27.170944 | orchestrator | Tuesday 03 March 2026 00:57:57 +0000 (0:00:00.587) 0:04:34.169 *********
2026-03-03 01:03:27.170947 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.170950 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.170953 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.170956 | orchestrator |
2026-03-03 01:03:27.170959 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-03 01:03:27.170962 | orchestrator | Tuesday 03 March 2026 00:57:57 +0000 (0:00:00.409) 0:04:34.579 *********
2026-03-03 01:03:27.170965 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-03 01:03:27.170969 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-03 01:03:27.170974 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-03 01:03:27.170979 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-03 01:03:27.170984 | orchestrator |
2026-03-03 01:03:27.170989 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-03 01:03:27.170995 | orchestrator | Tuesday 03 March 2026 00:58:06 +0000 (0:00:09.358) 0:04:43.938 *********
2026-03-03 01:03:27.170999 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.171002 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.171005 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.171008 | orchestrator |
2026-03-03 01:03:27.171011 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-03 01:03:27.171014 | orchestrator | Tuesday 03 March 2026 00:58:07 +0000 (0:00:00.312) 0:04:44.250 *********
2026-03-03 01:03:27.171017 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-03 01:03:27.171020 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-03 01:03:27.171023 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-03 01:03:27.171027 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-03 01:03:27.171030 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-03 01:03:27.171033 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-03 01:03:27.171037 | orchestrator |
2026-03-03 01:03:27.171060 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-03 01:03:27.171094 | orchestrator | Tuesday 03 March 2026 00:58:09 +0000 (0:00:02.369) 0:04:46.619 *********
2026-03-03 01:03:27.171098 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-03 01:03:27.171101 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-03 01:03:27.171104 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-03 01:03:27.171108 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-03 01:03:27.171113 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-03 01:03:27.171119 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-03 01:03:27.171127 | orchestrator |
2026-03-03 01:03:27.171132 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-03 01:03:27.171137 | orchestrator | Tuesday 03 March 2026 00:58:11 +0000 (0:00:01.674) 0:04:48.294 *********
2026-03-03 01:03:27.171142 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.171146 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.171156 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.171161 | orchestrator |
2026-03-03 01:03:27.171166 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-03 01:03:27.171171 | orchestrator | Tuesday 03 March 2026 00:58:12 +0000 (0:00:01.157) 0:04:49.451 *********
2026-03-03 01:03:27.171174 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.171177 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.171181 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.171184 | orchestrator |
2026-03-03 01:03:27.171187 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-03 01:03:27.171190 | orchestrator | Tuesday 03 March 2026 00:58:12 +0000 (0:00:00.272) 0:04:49.724 *********
2026-03-03 01:03:27.171193 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.171196 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.171199 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.171202 | orchestrator |
2026-03-03 01:03:27.171205 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-03 01:03:27.171208 | orchestrator | Tuesday 03 March 2026 00:58:13 +0000 (0:00:00.298) 0:04:50.023 *********
2026-03-03 01:03:27.171212 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:03:27.171215 | orchestrator |
2026-03-03 01:03:27.171218 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-03 01:03:27.171225 | orchestrator | Tuesday 03 March 2026 00:58:13 +0000 (0:00:00.532) 0:04:50.556 *********
2026-03-03 01:03:27.171228 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.171231 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.171234 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.171237 | orchestrator |
2026-03-03 01:03:27.171241 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-03 01:03:27.171244 | orchestrator | Tuesday 03 March 2026 00:58:13 +0000 (0:00:00.289) 0:04:50.845 *********
2026-03-03 01:03:27.171247 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.171250 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.171253 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:03:27.171256 | orchestrator |
2026-03-03 01:03:27.171259 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-03 01:03:27.171262 | orchestrator | Tuesday 03 March 2026 00:58:14 +0000 (0:00:00.316) 0:04:51.162 *********
2026-03-03 01:03:27.171265 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:03:27.171269 | orchestrator |
2026-03-03 01:03:27.171272 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-03 01:03:27.171275 | orchestrator | Tuesday 03 March 2026 00:58:14 +0000 (0:00:00.427) 0:04:51.589 *********
2026-03-03 01:03:27.171278 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.171281 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.171284 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.171287 | orchestrator |
2026-03-03 01:03:27.171290 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-03 01:03:27.171293 | orchestrator | Tuesday 03 March 2026 00:58:16 +0000 (0:00:01.706) 0:04:53.296 *********
2026-03-03 01:03:27.171297 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.171300 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.171303 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.171306 | orchestrator |
2026-03-03 01:03:27.171309 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-03 01:03:27.171312 | orchestrator | Tuesday 03 March 2026 00:58:17 +0000 (0:00:01.198) 0:04:54.495 *********
2026-03-03 01:03:27.171315 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.171318 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.171321 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.171324 | orchestrator |
2026-03-03 01:03:27.171327 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-03 01:03:27.171333 | orchestrator | Tuesday 03 March 2026 00:58:19 +0000 (0:00:01.792) 0:04:56.287 *********
2026-03-03 01:03:27.171337 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.171340 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.171343 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.171346 | orchestrator |
2026-03-03 01:03:27.171349 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-03 01:03:27.171352 | orchestrator | Tuesday 03 March 2026 00:58:21 +0000 (0:00:02.259) 0:04:58.546 *********
2026-03-03 01:03:27.171355 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.171358 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:03:27.171361 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-03 01:03:27.171364 | orchestrator |
2026-03-03 01:03:27.171369 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-03 01:03:27.171374 | orchestrator | Tuesday 03 March 2026 00:58:22 +0000 (0:00:00.510) 0:04:59.057 *********
2026-03-03 01:03:27.171379 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-03 01:03:27.171402 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-03 01:03:27.171408 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-03 01:03:27.171413 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-03 01:03:27.171418 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-03-03 01:03:27.171424 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-03-03 01:03:27.171429 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-03 01:03:27.171434 | orchestrator |
2026-03-03 01:03:27.171439 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-03 01:03:27.171445 | orchestrator | Tuesday 03 March 2026 00:58:57 +0000 (0:00:35.751) 0:05:34.809 *********
2026-03-03 01:03:27.171450 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-03 01:03:27.171455 | orchestrator |
2026-03-03 01:03:27.171460 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-03 01:03:27.171465 | orchestrator | Tuesday 03 March 2026 00:58:59 +0000 (0:00:01.299) 0:05:36.108 *********
2026-03-03 01:03:27.171470 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.171475 | orchestrator |
2026-03-03 01:03:27.171480 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-03 01:03:27.171486 | orchestrator | Tuesday 03 March 2026 00:58:59 +0000 (0:00:00.282) 0:05:36.391 *********
2026-03-03 01:03:27.171491 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.171496 | orchestrator |
2026-03-03 01:03:27.171501 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-03 01:03:27.171506 | orchestrator | Tuesday 03 March 2026 00:58:59 +0000 (0:00:00.125) 0:05:36.516 *********
2026-03-03 01:03:27.171511 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-03 01:03:27.171516 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-03 01:03:27.171522 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-03 01:03:27.171527 | orchestrator |
2026-03-03 01:03:27.171535 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-03 01:03:27.171540 | orchestrator | Tuesday 03 March 2026 00:59:05 +0000 (0:00:06.339) 0:05:42.856 *********
2026-03-03 01:03:27.171545 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-03 01:03:27.171550 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-03 01:03:27.171556 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-03 01:03:27.171565 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-03 01:03:27.171570 | orchestrator |
2026-03-03 01:03:27.171575 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-03 01:03:27.171580 | orchestrator | Tuesday 03 March 2026 00:59:10 +0000 (0:00:04.878) 0:05:47.734 *********
2026-03-03 01:03:27.171586 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.171591 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.171596 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.171601 | orchestrator |
2026-03-03 01:03:27.171607 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-03 01:03:27.171612 | orchestrator | Tuesday 03 March 2026 00:59:11 +0000 (0:00:00.687) 0:05:48.422 *********
2026-03-03 01:03:27.171615 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:03:27.171618 | orchestrator |
2026-03-03 01:03:27.171621 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-03 01:03:27.171624 | orchestrator | Tuesday 03 March 2026 00:59:11 +0000 (0:00:00.472) 0:05:48.894 *********
2026-03-03 01:03:27.171627 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.171631 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.171634 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.171637 | orchestrator |
2026-03-03 01:03:27.171640 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-03 01:03:27.171643 | orchestrator | Tuesday 03 March 2026 00:59:12 +0000 (0:00:00.425) 0:05:49.320 *********
2026-03-03 01:03:27.171646 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:03:27.171649 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:03:27.171652 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:03:27.171655 | orchestrator |
2026-03-03 01:03:27.171658 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-03 01:03:27.171661 | orchestrator | Tuesday 03 March 2026 00:59:13 +0000 (0:00:01.239) 0:05:50.559 *********
2026-03-03 01:03:27.171665 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-03 01:03:27.171668 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-03 01:03:27.171671 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-03 01:03:27.171674 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:03:27.171677 | orchestrator |
2026-03-03 01:03:27.171680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-03 01:03:27.171683 | orchestrator | Tuesday 03 March 2026 00:59:14 +0000 (0:00:00.559) 0:05:51.118 *********
2026-03-03 01:03:27.171686 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:03:27.171689 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:03:27.171692 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:03:27.171695 | orchestrator |
2026-03-03 01:03:27.171699 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-03 01:03:27.171702 | orchestrator |
2026-03-03 01:03:27.171705 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-03
01:03:27.171708 | orchestrator | Tuesday 03 March 2026 00:59:14 +0000 (0:00:00.464) 0:05:51.582 ********* 2026-03-03 01:03:27.171726 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.171732 | orchestrator | 2026-03-03 01:03:27.171737 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-03 01:03:27.171742 | orchestrator | Tuesday 03 March 2026 00:59:15 +0000 (0:00:00.607) 0:05:52.190 ********* 2026-03-03 01:03:27.171747 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.171752 | orchestrator | 2026-03-03 01:03:27.171757 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-03 01:03:27.171762 | orchestrator | Tuesday 03 March 2026 00:59:15 +0000 (0:00:00.487) 0:05:52.677 ********* 2026-03-03 01:03:27.171771 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.171777 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.171780 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.171783 | orchestrator | 2026-03-03 01:03:27.171786 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-03 01:03:27.171789 | orchestrator | Tuesday 03 March 2026 00:59:16 +0000 (0:00:00.404) 0:05:53.082 ********* 2026-03-03 01:03:27.171792 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.171795 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.171799 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.171802 | orchestrator | 2026-03-03 01:03:27.171805 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-03 01:03:27.171808 | orchestrator | Tuesday 03 March 2026 00:59:16 +0000 (0:00:00.668) 0:05:53.751 ********* 
2026-03-03 01:03:27.171811 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.171814 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.171818 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.171823 | orchestrator | 2026-03-03 01:03:27.171828 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-03 01:03:27.171833 | orchestrator | Tuesday 03 March 2026 00:59:17 +0000 (0:00:00.793) 0:05:54.545 ********* 2026-03-03 01:03:27.171840 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.171843 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.171846 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.171849 | orchestrator | 2026-03-03 01:03:27.171852 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-03 01:03:27.171856 | orchestrator | Tuesday 03 March 2026 00:59:18 +0000 (0:00:00.697) 0:05:55.242 ********* 2026-03-03 01:03:27.171859 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.171865 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.171869 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.171874 | orchestrator | 2026-03-03 01:03:27.171881 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-03 01:03:27.171888 | orchestrator | Tuesday 03 March 2026 00:59:18 +0000 (0:00:00.411) 0:05:55.653 ********* 2026-03-03 01:03:27.171893 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.171898 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.171903 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.171909 | orchestrator | 2026-03-03 01:03:27.171912 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-03 01:03:27.171915 | orchestrator | Tuesday 03 March 2026 00:59:18 +0000 (0:00:00.247) 0:05:55.901 ********* 2026-03-03 01:03:27.171918 | 
orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.171921 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.171924 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.171928 | orchestrator | 2026-03-03 01:03:27.171931 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-03 01:03:27.171934 | orchestrator | Tuesday 03 March 2026 00:59:19 +0000 (0:00:00.266) 0:05:56.167 ********* 2026-03-03 01:03:27.171937 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.171940 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.171943 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.171946 | orchestrator | 2026-03-03 01:03:27.171949 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-03 01:03:27.171953 | orchestrator | Tuesday 03 March 2026 00:59:19 +0000 (0:00:00.695) 0:05:56.863 ********* 2026-03-03 01:03:27.171956 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.171959 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.171962 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.171965 | orchestrator | 2026-03-03 01:03:27.171968 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-03 01:03:27.171971 | orchestrator | Tuesday 03 March 2026 00:59:20 +0000 (0:00:00.841) 0:05:57.704 ********* 2026-03-03 01:03:27.171974 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.171977 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.171984 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.171987 | orchestrator | 2026-03-03 01:03:27.171990 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-03 01:03:27.171993 | orchestrator | Tuesday 03 March 2026 00:59:20 +0000 (0:00:00.279) 0:05:57.984 ********* 2026-03-03 01:03:27.171996 | orchestrator | skipping: 
[testbed-node-3] 2026-03-03 01:03:27.171999 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.172002 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172006 | orchestrator | 2026-03-03 01:03:27.172009 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-03 01:03:27.172012 | orchestrator | Tuesday 03 March 2026 00:59:21 +0000 (0:00:00.252) 0:05:58.237 ********* 2026-03-03 01:03:27.172015 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172018 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.172021 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172024 | orchestrator | 2026-03-03 01:03:27.172028 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-03 01:03:27.172031 | orchestrator | Tuesday 03 March 2026 00:59:21 +0000 (0:00:00.282) 0:05:58.519 ********* 2026-03-03 01:03:27.172034 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172037 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.172040 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172043 | orchestrator | 2026-03-03 01:03:27.172046 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-03 01:03:27.172049 | orchestrator | Tuesday 03 March 2026 00:59:21 +0000 (0:00:00.459) 0:05:58.979 ********* 2026-03-03 01:03:27.172052 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172056 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.172061 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172076 | orchestrator | 2026-03-03 01:03:27.172080 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-03 01:03:27.172083 | orchestrator | Tuesday 03 March 2026 00:59:22 +0000 (0:00:00.315) 0:05:59.295 ********* 2026-03-03 01:03:27.172086 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.172089 | 
orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.172092 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172095 | orchestrator | 2026-03-03 01:03:27.172098 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-03 01:03:27.172102 | orchestrator | Tuesday 03 March 2026 00:59:22 +0000 (0:00:00.251) 0:05:59.546 ********* 2026-03-03 01:03:27.172105 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.172108 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.172111 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172114 | orchestrator | 2026-03-03 01:03:27.172117 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-03 01:03:27.172120 | orchestrator | Tuesday 03 March 2026 00:59:22 +0000 (0:00:00.270) 0:05:59.816 ********* 2026-03-03 01:03:27.172123 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.172126 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.172130 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172133 | orchestrator | 2026-03-03 01:03:27.172136 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-03 01:03:27.172139 | orchestrator | Tuesday 03 March 2026 00:59:23 +0000 (0:00:00.516) 0:06:00.333 ********* 2026-03-03 01:03:27.172142 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172145 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.172148 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172151 | orchestrator | 2026-03-03 01:03:27.172154 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-03 01:03:27.172158 | orchestrator | Tuesday 03 March 2026 00:59:23 +0000 (0:00:00.329) 0:06:00.663 ********* 2026-03-03 01:03:27.172161 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172164 | orchestrator | ok: 
[testbed-node-4] 2026-03-03 01:03:27.172167 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172170 | orchestrator | 2026-03-03 01:03:27.172176 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-03 01:03:27.172179 | orchestrator | Tuesday 03 March 2026 00:59:24 +0000 (0:00:00.488) 0:06:01.151 ********* 2026-03-03 01:03:27.172182 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172185 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.172188 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172192 | orchestrator | 2026-03-03 01:03:27.172197 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-03 01:03:27.172200 | orchestrator | Tuesday 03 March 2026 00:59:24 +0000 (0:00:00.442) 0:06:01.594 ********* 2026-03-03 01:03:27.172203 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-03 01:03:27.172207 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-03 01:03:27.172210 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-03 01:03:27.172213 | orchestrator | 2026-03-03 01:03:27.172216 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-03 01:03:27.172219 | orchestrator | Tuesday 03 March 2026 00:59:25 +0000 (0:00:00.568) 0:06:02.163 ********* 2026-03-03 01:03:27.172222 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.172225 | orchestrator | 2026-03-03 01:03:27.172228 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-03 01:03:27.172232 | orchestrator | Tuesday 03 March 2026 00:59:25 +0000 (0:00:00.474) 0:06:02.637 ********* 2026-03-03 01:03:27.172235 | orchestrator | skipping: 
[testbed-node-3] 2026-03-03 01:03:27.172238 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.172241 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172244 | orchestrator | 2026-03-03 01:03:27.172247 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-03 01:03:27.172250 | orchestrator | Tuesday 03 March 2026 00:59:26 +0000 (0:00:00.398) 0:06:03.035 ********* 2026-03-03 01:03:27.172253 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.172256 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.172259 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172262 | orchestrator | 2026-03-03 01:03:27.172266 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-03 01:03:27.172269 | orchestrator | Tuesday 03 March 2026 00:59:26 +0000 (0:00:00.253) 0:06:03.288 ********* 2026-03-03 01:03:27.172272 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172275 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.172278 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172281 | orchestrator | 2026-03-03 01:03:27.172284 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-03 01:03:27.172287 | orchestrator | Tuesday 03 March 2026 00:59:26 +0000 (0:00:00.664) 0:06:03.953 ********* 2026-03-03 01:03:27.172290 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172293 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.172296 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172299 | orchestrator | 2026-03-03 01:03:27.172303 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-03 01:03:27.172306 | orchestrator | Tuesday 03 March 2026 00:59:27 +0000 (0:00:00.289) 0:06:04.242 ********* 2026-03-03 01:03:27.172309 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-03 01:03:27.172312 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-03 01:03:27.172315 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-03 01:03:27.172318 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-03 01:03:27.172321 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-03 01:03:27.172331 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-03 01:03:27.172335 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-03 01:03:27.172338 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-03 01:03:27.172341 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-03 01:03:27.172344 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-03 01:03:27.172347 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-03 01:03:27.172350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-03 01:03:27.172353 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-03 01:03:27.172356 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-03 01:03:27.172359 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-03 01:03:27.172362 | orchestrator | 2026-03-03 01:03:27.172365 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-03 01:03:27.172369 | orchestrator | Tuesday 03 March 2026 00:59:33 +0000 (0:00:06.417) 0:06:10.659 ********* 2026-03-03 01:03:27.172372 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.172375 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.172378 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172381 | orchestrator | 2026-03-03 01:03:27.172384 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-03 01:03:27.172387 | orchestrator | Tuesday 03 March 2026 00:59:33 +0000 (0:00:00.267) 0:06:10.927 ********* 2026-03-03 01:03:27.172390 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.172393 | orchestrator | 2026-03-03 01:03:27.172396 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-03 01:03:27.172400 | orchestrator | Tuesday 03 March 2026 00:59:34 +0000 (0:00:00.452) 0:06:11.380 ********* 2026-03-03 01:03:27.172405 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-03 01:03:27.172408 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-03 01:03:27.172411 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-03 01:03:27.172414 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-03 01:03:27.172417 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-03 01:03:27.172420 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-03 01:03:27.172423 | orchestrator | 2026-03-03 01:03:27.172426 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-03 01:03:27.172430 | orchestrator | Tuesday 03 March 2026 00:59:35 +0000 (0:00:01.234) 0:06:12.615 ********* 2026-03-03 01:03:27.172433 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.172436 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-03 01:03:27.172439 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-03 01:03:27.172442 | orchestrator | 2026-03-03 01:03:27.172445 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-03 01:03:27.172448 | orchestrator | Tuesday 03 March 2026 00:59:37 +0000 (0:00:02.161) 0:06:14.776 ********* 2026-03-03 01:03:27.172451 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-03 01:03:27.172454 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-03 01:03:27.172457 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.172461 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-03 01:03:27.172464 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-03 01:03:27.172467 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.172472 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-03 01:03:27.172475 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-03 01:03:27.172478 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.172481 | orchestrator | 2026-03-03 01:03:27.172485 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-03 01:03:27.172488 | orchestrator | Tuesday 03 March 2026 00:59:39 +0000 (0:00:01.236) 0:06:16.013 ********* 2026-03-03 01:03:27.172491 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:03:27.172494 | orchestrator | 2026-03-03 01:03:27.172497 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-03 01:03:27.172500 | orchestrator | Tuesday 03 March 2026 00:59:41 +0000 (0:00:02.239) 0:06:18.252 ********* 2026-03-03 01:03:27.172505 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.172510 | orchestrator | 2026-03-03 01:03:27.172515 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-03 01:03:27.172519 | orchestrator | Tuesday 03 March 2026 00:59:41 +0000 (0:00:00.522) 0:06:18.775 ********* 2026-03-03 01:03:27.172527 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f7865f1e-8b85-57a7-a15d-91986b577cab', 'data_vg': 'ceph-f7865f1e-8b85-57a7-a15d-91986b577cab'}) 2026-03-03 01:03:27.172534 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-896495c2-660d-5a75-b418-75215a0ec973', 'data_vg': 'ceph-896495c2-660d-5a75-b418-75215a0ec973'}) 2026-03-03 01:03:27.172542 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd', 'data_vg': 'ceph-a3b27c0a-2179-5024-9c6e-3cd3ebbe6cfd'}) 2026-03-03 01:03:27.172548 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-60a17889-adeb-5df5-a11b-dee290996ccf', 'data_vg': 'ceph-60a17889-adeb-5df5-a11b-dee290996ccf'}) 2026-03-03 01:03:27.172552 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d486d743-7c4f-58d7-8950-e96875d5f319', 'data_vg': 'ceph-d486d743-7c4f-58d7-8950-e96875d5f319'}) 2026-03-03 01:03:27.172557 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b901fd44-5489-5e25-a5fe-b820905f87a1', 'data_vg': 'ceph-b901fd44-5489-5e25-a5fe-b820905f87a1'}) 2026-03-03 01:03:27.172562 | orchestrator | 2026-03-03 01:03:27.172568 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-03 01:03:27.172573 | orchestrator | Tuesday 03 March 2026 01:00:17 +0000 (0:00:35.854) 0:06:54.629 ********* 2026-03-03 01:03:27.172578 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.172583 | orchestrator | skipping: [testbed-node-4] 2026-03-03 
01:03:27.172588 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172593 | orchestrator | 2026-03-03 01:03:27.172598 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-03 01:03:27.172603 | orchestrator | Tuesday 03 March 2026 01:00:17 +0000 (0:00:00.251) 0:06:54.881 ********* 2026-03-03 01:03:27.172608 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.172612 | orchestrator | 2026-03-03 01:03:27.172616 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-03 01:03:27.172621 | orchestrator | Tuesday 03 March 2026 01:00:18 +0000 (0:00:00.488) 0:06:55.370 ********* 2026-03-03 01:03:27.172626 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172630 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.172635 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172641 | orchestrator | 2026-03-03 01:03:27.172646 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-03 01:03:27.172651 | orchestrator | Tuesday 03 March 2026 01:00:19 +0000 (0:00:00.961) 0:06:56.332 ********* 2026-03-03 01:03:27.172656 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.172662 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.172667 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.172679 | orchestrator | 2026-03-03 01:03:27.172687 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-03 01:03:27.172692 | orchestrator | Tuesday 03 March 2026 01:00:21 +0000 (0:00:02.368) 0:06:58.700 ********* 2026-03-03 01:03:27.172697 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.172702 | orchestrator | 2026-03-03 01:03:27.172708 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-03 01:03:27.172713 | orchestrator | Tuesday 03 March 2026 01:00:22 +0000 (0:00:00.486) 0:06:59.187 ********* 2026-03-03 01:03:27.172718 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.172723 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.172729 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.172734 | orchestrator | 2026-03-03 01:03:27.172739 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-03 01:03:27.172744 | orchestrator | Tuesday 03 March 2026 01:00:23 +0000 (0:00:01.451) 0:07:00.638 ********* 2026-03-03 01:03:27.172749 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.172755 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.172760 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.172765 | orchestrator | 2026-03-03 01:03:27.172770 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-03 01:03:27.172776 | orchestrator | Tuesday 03 March 2026 01:00:24 +0000 (0:00:01.312) 0:07:01.950 ********* 2026-03-03 01:03:27.172781 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.172786 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.172791 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.172796 | orchestrator | 2026-03-03 01:03:27.172801 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-03 01:03:27.172806 | orchestrator | Tuesday 03 March 2026 01:00:27 +0000 (0:00:02.254) 0:07:04.204 ********* 2026-03-03 01:03:27.172811 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.172816 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.172821 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172826 | orchestrator | 2026-03-03 01:03:27.172831 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-03 01:03:27.172836 | orchestrator | Tuesday 03 March 2026 01:00:27 +0000 (0:00:00.345) 0:07:04.550 ********* 2026-03-03 01:03:27.172842 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.172847 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.172852 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.172857 | orchestrator | 2026-03-03 01:03:27.172862 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-03 01:03:27.172867 | orchestrator | Tuesday 03 March 2026 01:00:28 +0000 (0:00:00.634) 0:07:05.185 ********* 2026-03-03 01:03:27.172872 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-03 01:03:27.172877 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-03 01:03:27.172882 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-03 01:03:27.172887 | orchestrator | ok: [testbed-node-3] => (item=2) 2026-03-03 01:03:27.172892 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-03 01:03:27.172897 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-03 01:03:27.172902 | orchestrator | 2026-03-03 01:03:27.172908 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-03 01:03:27.172913 | orchestrator | Tuesday 03 March 2026 01:00:29 +0000 (0:00:00.952) 0:07:06.137 ********* 2026-03-03 01:03:27.172918 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-03 01:03:27.172923 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-03 01:03:27.172928 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-03-03 01:03:27.172933 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-03-03 01:03:27.172938 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-03 01:03:27.172947 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-03 01:03:27.172953 | orchestrator | 2026-03-03 01:03:27.172963 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-03 01:03:27.172969 | orchestrator | Tuesday 03 March 2026 01:00:31 +0000 (0:00:02.265) 0:07:08.403 ********* 2026-03-03 01:03:27.172974 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-03 01:03:27.172979 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-03-03 01:03:27.172984 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-03 01:03:27.172989 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-03-03 01:03:27.172994 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-03 01:03:27.173000 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-03 01:03:27.173005 | orchestrator | 2026-03-03 01:03:27.173011 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-03 01:03:27.173016 | orchestrator | Tuesday 03 March 2026 01:00:35 +0000 (0:00:04.194) 0:07:12.597 ********* 2026-03-03 01:03:27.173021 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173026 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173031 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:03:27.173036 | orchestrator | 2026-03-03 01:03:27.173041 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-03 01:03:27.173047 | orchestrator | Tuesday 03 March 2026 01:00:37 +0000 (0:00:02.249) 0:07:14.847 ********* 2026-03-03 01:03:27.173052 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173057 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173062 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-03 01:03:27.173097 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:03:27.173103 | orchestrator | 2026-03-03 01:03:27.173108 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-03 01:03:27.173113 | orchestrator | Tuesday 03 March 2026 01:00:50 +0000 (0:00:12.582) 0:07:27.429 ********* 2026-03-03 01:03:27.173118 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173123 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173128 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173133 | orchestrator | 2026-03-03 01:03:27.173138 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-03 01:03:27.173147 | orchestrator | Tuesday 03 March 2026 01:00:51 +0000 (0:00:01.091) 0:07:28.521 ********* 2026-03-03 01:03:27.173152 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173157 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173162 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173167 | orchestrator | 2026-03-03 01:03:27.173172 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-03 01:03:27.173178 | orchestrator | Tuesday 03 March 2026 01:00:51 +0000 (0:00:00.344) 0:07:28.865 ********* 2026-03-03 01:03:27.173184 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.173189 | orchestrator | 2026-03-03 01:03:27.173194 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-03 01:03:27.173199 | orchestrator | Tuesday 03 March 2026 01:00:52 +0000 (0:00:00.524) 0:07:29.390 ********* 2026-03-03 01:03:27.173204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:03:27.173209 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-03 01:03:27.173215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:03:27.173220 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173225 | orchestrator | 2026-03-03 01:03:27.173230 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-03 01:03:27.173235 | orchestrator | Tuesday 03 March 2026 01:00:53 +0000 (0:00:00.624) 0:07:30.014 ********* 2026-03-03 01:03:27.173240 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173245 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173250 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173259 | orchestrator | 2026-03-03 01:03:27.173264 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-03 01:03:27.173270 | orchestrator | Tuesday 03 March 2026 01:00:53 +0000 (0:00:00.554) 0:07:30.569 ********* 2026-03-03 01:03:27.173274 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173279 | orchestrator | 2026-03-03 01:03:27.173284 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-03 01:03:27.173289 | orchestrator | Tuesday 03 March 2026 01:00:53 +0000 (0:00:00.215) 0:07:30.785 ********* 2026-03-03 01:03:27.173294 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173299 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173305 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173310 | orchestrator | 2026-03-03 01:03:27.173315 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-03 01:03:27.173320 | orchestrator | Tuesday 03 March 2026 01:00:54 +0000 (0:00:00.371) 0:07:31.156 ********* 2026-03-03 01:03:27.173325 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173330 | orchestrator | 2026-03-03 01:03:27.173336 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-03 01:03:27.173341 | orchestrator | Tuesday 03 March 2026 01:00:54 +0000 (0:00:00.200) 0:07:31.357 ********* 2026-03-03 01:03:27.173346 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173352 | orchestrator | 2026-03-03 01:03:27.173357 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-03 01:03:27.173362 | orchestrator | Tuesday 03 March 2026 01:00:54 +0000 (0:00:00.202) 0:07:31.560 ********* 2026-03-03 01:03:27.173367 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173372 | orchestrator | 2026-03-03 01:03:27.173377 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-03 01:03:27.173382 | orchestrator | Tuesday 03 March 2026 01:00:54 +0000 (0:00:00.124) 0:07:31.684 ********* 2026-03-03 01:03:27.173387 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173392 | orchestrator | 2026-03-03 01:03:27.173400 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-03 01:03:27.173406 | orchestrator | Tuesday 03 March 2026 01:00:54 +0000 (0:00:00.218) 0:07:31.902 ********* 2026-03-03 01:03:27.173410 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173415 | orchestrator | 2026-03-03 01:03:27.173420 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-03 01:03:27.173425 | orchestrator | Tuesday 03 March 2026 01:00:55 +0000 (0:00:00.222) 0:07:32.124 ********* 2026-03-03 01:03:27.173430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:03:27.173435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:03:27.173439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:03:27.173445 | orchestrator | skipping: [testbed-node-3] 2026-03-03 
01:03:27.173450 | orchestrator | 2026-03-03 01:03:27.173455 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-03 01:03:27.173460 | orchestrator | Tuesday 03 March 2026 01:00:56 +0000 (0:00:00.901) 0:07:33.025 ********* 2026-03-03 01:03:27.173465 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173471 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173476 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173480 | orchestrator | 2026-03-03 01:03:27.173486 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-03 01:03:27.173490 | orchestrator | Tuesday 03 March 2026 01:00:56 +0000 (0:00:00.296) 0:07:33.322 ********* 2026-03-03 01:03:27.173495 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173500 | orchestrator | 2026-03-03 01:03:27.173504 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-03 01:03:27.173508 | orchestrator | Tuesday 03 March 2026 01:00:56 +0000 (0:00:00.202) 0:07:33.525 ********* 2026-03-03 01:03:27.173514 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173519 | orchestrator | 2026-03-03 01:03:27.173534 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-03 01:03:27.173540 | orchestrator | 2026-03-03 01:03:27.173545 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-03 01:03:27.173550 | orchestrator | Tuesday 03 March 2026 01:00:57 +0000 (0:00:00.638) 0:07:34.163 ********* 2026-03-03 01:03:27.173556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.173562 | orchestrator | 2026-03-03 01:03:27.173570 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-03 01:03:27.173574 | orchestrator | Tuesday 03 March 2026 01:00:58 +0000 (0:00:01.173) 0:07:35.337 ********* 2026-03-03 01:03:27.173579 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.173583 | orchestrator | 2026-03-03 01:03:27.173587 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-03 01:03:27.173591 | orchestrator | Tuesday 03 March 2026 01:00:59 +0000 (0:00:01.212) 0:07:36.549 ********* 2026-03-03 01:03:27.173596 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173601 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173606 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173610 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.173615 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.173619 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.173624 | orchestrator | 2026-03-03 01:03:27.173628 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-03 01:03:27.173633 | orchestrator | Tuesday 03 March 2026 01:01:00 +0000 (0:00:01.121) 0:07:37.671 ********* 2026-03-03 01:03:27.173637 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.173642 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.173647 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.173652 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.173657 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.173662 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.173666 | orchestrator | 2026-03-03 01:03:27.173671 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-03 01:03:27.173675 | orchestrator | Tuesday 03 
March 2026 01:01:01 +0000 (0:00:00.681) 0:07:38.352 ********* 2026-03-03 01:03:27.173680 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.173684 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.173689 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.173693 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.173698 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.173702 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.173707 | orchestrator | 2026-03-03 01:03:27.173712 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-03 01:03:27.173717 | orchestrator | Tuesday 03 March 2026 01:01:02 +0000 (0:00:00.973) 0:07:39.326 ********* 2026-03-03 01:03:27.173721 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.173726 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.173731 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.173735 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.173740 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.173745 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.173750 | orchestrator | 2026-03-03 01:03:27.173754 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-03 01:03:27.173759 | orchestrator | Tuesday 03 March 2026 01:01:03 +0000 (0:00:00.689) 0:07:40.015 ********* 2026-03-03 01:03:27.173763 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173768 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173773 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173777 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.173782 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.173791 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.173797 | orchestrator | 2026-03-03 01:03:27.173802 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-03-03 01:03:27.173807 | orchestrator | Tuesday 03 March 2026 01:01:04 +0000 (0:00:01.265) 0:07:41.280 ********* 2026-03-03 01:03:27.173812 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173818 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173828 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173834 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.173839 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.173844 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.173849 | orchestrator | 2026-03-03 01:03:27.173855 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-03 01:03:27.173860 | orchestrator | Tuesday 03 March 2026 01:01:04 +0000 (0:00:00.583) 0:07:41.864 ********* 2026-03-03 01:03:27.173865 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173870 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173875 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173880 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.173885 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.173890 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.173895 | orchestrator | 2026-03-03 01:03:27.173900 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-03 01:03:27.173906 | orchestrator | Tuesday 03 March 2026 01:01:05 +0000 (0:00:00.803) 0:07:42.668 ********* 2026-03-03 01:03:27.173909 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.173912 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.173915 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.173918 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.173921 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.173924 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.173927 | 
orchestrator | 2026-03-03 01:03:27.173930 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-03 01:03:27.173934 | orchestrator | Tuesday 03 March 2026 01:01:06 +0000 (0:00:01.042) 0:07:43.711 ********* 2026-03-03 01:03:27.173937 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.173940 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.173943 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.173946 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.173949 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.173952 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.173963 | orchestrator | 2026-03-03 01:03:27.173970 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-03 01:03:27.173974 | orchestrator | Tuesday 03 March 2026 01:01:08 +0000 (0:00:01.304) 0:07:45.015 ********* 2026-03-03 01:03:27.173977 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.173980 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.173983 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.173986 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.173989 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.173992 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.173995 | orchestrator | 2026-03-03 01:03:27.174001 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-03 01:03:27.174004 | orchestrator | Tuesday 03 March 2026 01:01:08 +0000 (0:00:00.591) 0:07:45.606 ********* 2026-03-03 01:03:27.174007 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.174010 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.174041 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.174045 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.174048 | orchestrator | ok: [testbed-node-1] 2026-03-03 
01:03:27.174051 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.174054 | orchestrator | 2026-03-03 01:03:27.174057 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-03 01:03:27.174060 | orchestrator | Tuesday 03 March 2026 01:01:09 +0000 (0:00:00.878) 0:07:46.485 ********* 2026-03-03 01:03:27.174076 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174082 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174087 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174092 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.174097 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.174102 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.174106 | orchestrator | 2026-03-03 01:03:27.174112 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-03 01:03:27.174116 | orchestrator | Tuesday 03 March 2026 01:01:10 +0000 (0:00:00.647) 0:07:47.132 ********* 2026-03-03 01:03:27.174119 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174122 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174125 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174128 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.174131 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.174134 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.174137 | orchestrator | 2026-03-03 01:03:27.174140 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-03 01:03:27.174143 | orchestrator | Tuesday 03 March 2026 01:01:10 +0000 (0:00:00.862) 0:07:47.994 ********* 2026-03-03 01:03:27.174146 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174149 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174153 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174156 | orchestrator | skipping: [testbed-node-0] 
2026-03-03 01:03:27.174159 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.174162 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.174165 | orchestrator | 2026-03-03 01:03:27.174168 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-03 01:03:27.174171 | orchestrator | Tuesday 03 March 2026 01:01:11 +0000 (0:00:00.677) 0:07:48.672 ********* 2026-03-03 01:03:27.174174 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.174177 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.174180 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.174183 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.174186 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.174189 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.174192 | orchestrator | 2026-03-03 01:03:27.174196 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-03 01:03:27.174199 | orchestrator | Tuesday 03 March 2026 01:01:12 +0000 (0:00:00.811) 0:07:49.484 ********* 2026-03-03 01:03:27.174202 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.174205 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.174208 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.174211 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:03:27.174214 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:03:27.174217 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:03:27.174220 | orchestrator | 2026-03-03 01:03:27.174223 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-03 01:03:27.174226 | orchestrator | Tuesday 03 March 2026 01:01:13 +0000 (0:00:00.594) 0:07:50.078 ********* 2026-03-03 01:03:27.174236 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.174239 | orchestrator | skipping: [testbed-node-4] 
2026-03-03 01:03:27.174242 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.174245 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.174248 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.174251 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.174254 | orchestrator | 2026-03-03 01:03:27.174257 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-03 01:03:27.174261 | orchestrator | Tuesday 03 March 2026 01:01:13 +0000 (0:00:00.845) 0:07:50.923 ********* 2026-03-03 01:03:27.174264 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174267 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174270 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174276 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.174279 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.174282 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.174285 | orchestrator | 2026-03-03 01:03:27.174288 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-03 01:03:27.174291 | orchestrator | Tuesday 03 March 2026 01:01:14 +0000 (0:00:00.650) 0:07:51.574 ********* 2026-03-03 01:03:27.174294 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174297 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174300 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174303 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.174306 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.174309 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.174315 | orchestrator | 2026-03-03 01:03:27.174320 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-03 01:03:27.174324 | orchestrator | Tuesday 03 March 2026 01:01:15 +0000 (0:00:01.323) 0:07:52.898 ********* 2026-03-03 01:03:27.174330 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-03 01:03:27.174335 | orchestrator | 2026-03-03 01:03:27.174339 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-03 01:03:27.174344 | orchestrator | Tuesday 03 March 2026 01:01:19 +0000 (0:00:03.444) 0:07:56.342 ********* 2026-03-03 01:03:27.174349 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:03:27.174353 | orchestrator | 2026-03-03 01:03:27.174358 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-03 01:03:27.174363 | orchestrator | Tuesday 03 March 2026 01:01:21 +0000 (0:00:02.105) 0:07:58.448 ********* 2026-03-03 01:03:27.174367 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.174371 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.174375 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.174380 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.174388 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.174393 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.174397 | orchestrator | 2026-03-03 01:03:27.174402 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-03 01:03:27.174406 | orchestrator | Tuesday 03 March 2026 01:01:23 +0000 (0:00:01.999) 0:08:00.447 ********* 2026-03-03 01:03:27.174411 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.174415 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.174420 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.174425 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.174429 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.174434 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.174438 | orchestrator | 2026-03-03 01:03:27.174442 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-03 01:03:27.174447 | orchestrator | Tuesday 03 March 2026 01:01:24 +0000 (0:00:01.072) 0:08:01.519 ********* 2026-03-03 01:03:27.174452 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.174457 | orchestrator | 2026-03-03 01:03:27.174462 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-03 01:03:27.174467 | orchestrator | Tuesday 03 March 2026 01:01:25 +0000 (0:00:01.196) 0:08:02.716 ********* 2026-03-03 01:03:27.174472 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.174477 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.174482 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.174487 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.174492 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.174496 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.174501 | orchestrator | 2026-03-03 01:03:27.174506 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-03 01:03:27.174510 | orchestrator | Tuesday 03 March 2026 01:01:27 +0000 (0:00:01.795) 0:08:04.511 ********* 2026-03-03 01:03:27.174519 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.174523 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.174527 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.174532 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.174537 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.174542 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.174546 | orchestrator | 2026-03-03 01:03:27.174551 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-03 01:03:27.174555 | orchestrator | Tuesday 03 March 2026 01:01:30 +0000 (0:00:02.879) 
0:08:07.391 ********* 2026-03-03 01:03:27.174561 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:03:27.174566 | orchestrator | 2026-03-03 01:03:27.174571 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-03 01:03:27.174575 | orchestrator | Tuesday 03 March 2026 01:01:31 +0000 (0:00:01.059) 0:08:08.450 ********* 2026-03-03 01:03:27.174580 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174586 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174591 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174596 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.174600 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.174605 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.174609 | orchestrator | 2026-03-03 01:03:27.174613 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-03 01:03:27.174618 | orchestrator | Tuesday 03 March 2026 01:01:32 +0000 (0:00:00.678) 0:08:09.129 ********* 2026-03-03 01:03:27.174623 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.174632 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.174637 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:03:27.174642 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.174647 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:03:27.174651 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:03:27.174656 | orchestrator | 2026-03-03 01:03:27.174660 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-03 01:03:27.174665 | orchestrator | Tuesday 03 March 2026 01:01:33 +0000 (0:00:01.840) 0:08:10.969 ********* 2026-03-03 01:03:27.174670 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174675 | 
orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174681 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174686 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:03:27.174691 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:03:27.174696 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:03:27.174701 | orchestrator | 2026-03-03 01:03:27.174706 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-03 01:03:27.174711 | orchestrator | 2026-03-03 01:03:27.174715 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-03 01:03:27.174720 | orchestrator | Tuesday 03 March 2026 01:01:34 +0000 (0:00:00.936) 0:08:11.906 ********* 2026-03-03 01:03:27.174726 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.174731 | orchestrator | 2026-03-03 01:03:27.174736 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-03 01:03:27.174741 | orchestrator | Tuesday 03 March 2026 01:01:35 +0000 (0:00:00.436) 0:08:12.343 ********* 2026-03-03 01:03:27.174746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.174752 | orchestrator | 2026-03-03 01:03:27.174758 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-03 01:03:27.174762 | orchestrator | Tuesday 03 March 2026 01:01:36 +0000 (0:00:00.694) 0:08:13.037 ********* 2026-03-03 01:03:27.174765 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.174768 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.174775 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.174778 | orchestrator | 2026-03-03 01:03:27.174781 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-03 01:03:27.174787 | orchestrator | Tuesday 03 March 2026 01:01:36 +0000 (0:00:00.296) 0:08:13.333 ********* 2026-03-03 01:03:27.174791 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174794 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174797 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174802 | orchestrator | 2026-03-03 01:03:27.174807 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-03 01:03:27.174812 | orchestrator | Tuesday 03 March 2026 01:01:36 +0000 (0:00:00.630) 0:08:13.964 ********* 2026-03-03 01:03:27.174818 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174823 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174828 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174832 | orchestrator | 2026-03-03 01:03:27.174836 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-03 01:03:27.174842 | orchestrator | Tuesday 03 March 2026 01:01:37 +0000 (0:00:00.793) 0:08:14.757 ********* 2026-03-03 01:03:27.174847 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174852 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174857 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174862 | orchestrator | 2026-03-03 01:03:27.174868 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-03 01:03:27.174873 | orchestrator | Tuesday 03 March 2026 01:01:38 +0000 (0:00:00.673) 0:08:15.431 ********* 2026-03-03 01:03:27.174879 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.174884 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.174890 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.174893 | orchestrator | 2026-03-03 01:03:27.174896 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-03 
01:03:27.174899 | orchestrator | Tuesday 03 March 2026 01:01:38 +0000 (0:00:00.258) 0:08:15.689 ********* 2026-03-03 01:03:27.174903 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.174906 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.174909 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.174912 | orchestrator | 2026-03-03 01:03:27.174915 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-03 01:03:27.174918 | orchestrator | Tuesday 03 March 2026 01:01:38 +0000 (0:00:00.264) 0:08:15.953 ********* 2026-03-03 01:03:27.174921 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.174924 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.174927 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.174931 | orchestrator | 2026-03-03 01:03:27.174934 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-03 01:03:27.174937 | orchestrator | Tuesday 03 March 2026 01:01:39 +0000 (0:00:00.416) 0:08:16.370 ********* 2026-03-03 01:03:27.174940 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174943 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174946 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174949 | orchestrator | 2026-03-03 01:03:27.174952 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-03 01:03:27.174955 | orchestrator | Tuesday 03 March 2026 01:01:39 +0000 (0:00:00.606) 0:08:16.977 ********* 2026-03-03 01:03:27.174959 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.174962 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.174965 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.174968 | orchestrator | 2026-03-03 01:03:27.174971 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-03 01:03:27.174974 | orchestrator | 
Tuesday 03 March 2026 01:01:40 +0000 (0:00:00.609) 0:08:17.586 ********* 2026-03-03 01:03:27.174977 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.174980 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.174983 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.174987 | orchestrator | 2026-03-03 01:03:27.174990 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-03 01:03:27.174999 | orchestrator | Tuesday 03 March 2026 01:01:40 +0000 (0:00:00.275) 0:08:17.862 ********* 2026-03-03 01:03:27.175002 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175009 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175012 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175015 | orchestrator | 2026-03-03 01:03:27.175018 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-03 01:03:27.175022 | orchestrator | Tuesday 03 March 2026 01:01:41 +0000 (0:00:00.409) 0:08:18.272 ********* 2026-03-03 01:03:27.175025 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175028 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175031 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175034 | orchestrator | 2026-03-03 01:03:27.175037 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-03 01:03:27.175040 | orchestrator | Tuesday 03 March 2026 01:01:41 +0000 (0:00:00.275) 0:08:18.547 ********* 2026-03-03 01:03:27.175043 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175047 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175050 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175053 | orchestrator | 2026-03-03 01:03:27.175056 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-03 01:03:27.175059 | orchestrator | Tuesday 03 March 2026 01:01:41 +0000 
(0:00:00.294) 0:08:18.842 ********* 2026-03-03 01:03:27.175062 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175078 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175083 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175088 | orchestrator | 2026-03-03 01:03:27.175093 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-03 01:03:27.175098 | orchestrator | Tuesday 03 March 2026 01:01:42 +0000 (0:00:00.324) 0:08:19.167 ********* 2026-03-03 01:03:27.175103 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175109 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175114 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175118 | orchestrator | 2026-03-03 01:03:27.175122 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-03 01:03:27.175125 | orchestrator | Tuesday 03 March 2026 01:01:42 +0000 (0:00:00.549) 0:08:19.717 ********* 2026-03-03 01:03:27.175128 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175131 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175134 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175137 | orchestrator | 2026-03-03 01:03:27.175140 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-03 01:03:27.175143 | orchestrator | Tuesday 03 March 2026 01:01:43 +0000 (0:00:00.303) 0:08:20.020 ********* 2026-03-03 01:03:27.175146 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175150 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175155 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175159 | orchestrator | 2026-03-03 01:03:27.175162 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-03 01:03:27.175165 | orchestrator | Tuesday 03 March 2026 01:01:43 +0000 (0:00:00.286) 
0:08:20.306 ********* 2026-03-03 01:03:27.175168 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175171 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175174 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175177 | orchestrator | 2026-03-03 01:03:27.175180 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-03 01:03:27.175183 | orchestrator | Tuesday 03 March 2026 01:01:43 +0000 (0:00:00.311) 0:08:20.618 ********* 2026-03-03 01:03:27.175187 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175190 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175193 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175196 | orchestrator | 2026-03-03 01:03:27.175199 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-03 01:03:27.175203 | orchestrator | Tuesday 03 March 2026 01:01:44 +0000 (0:00:00.748) 0:08:21.367 ********* 2026-03-03 01:03:27.175209 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175212 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175215 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-03 01:03:27.175218 | orchestrator | 2026-03-03 01:03:27.175222 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-03 01:03:27.175225 | orchestrator | Tuesday 03 March 2026 01:01:44 +0000 (0:00:00.436) 0:08:21.804 ********* 2026-03-03 01:03:27.175228 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:03:27.175231 | orchestrator | 2026-03-03 01:03:27.175234 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-03 01:03:27.175237 | orchestrator | Tuesday 03 March 2026 01:01:46 +0000 (0:00:02.130) 0:08:23.935 ********* 2026-03-03 01:03:27.175241 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-03 01:03:27.175246 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175249 | orchestrator | 2026-03-03 01:03:27.175252 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-03 01:03:27.175255 | orchestrator | Tuesday 03 March 2026 01:01:47 +0000 (0:00:00.212) 0:08:24.147 ********* 2026-03-03 01:03:27.175259 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-03 01:03:27.175266 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-03 01:03:27.175269 | orchestrator | 2026-03-03 01:03:27.175272 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-03 01:03:27.175276 | orchestrator | Tuesday 03 March 2026 01:01:55 +0000 (0:00:07.874) 0:08:32.022 ********* 2026-03-03 01:03:27.175281 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:03:27.175285 | orchestrator | 2026-03-03 01:03:27.175288 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-03 01:03:27.175291 | orchestrator | Tuesday 03 March 2026 01:01:59 +0000 (0:00:04.048) 0:08:36.070 ********* 2026-03-03 01:03:27.175294 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-03 01:03:27.175297 | orchestrator | 2026-03-03 01:03:27.175300 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-03 01:03:27.175304 | orchestrator | Tuesday 03 March 2026 01:01:59 +0000 (0:00:00.464) 0:08:36.535 ********* 2026-03-03 01:03:27.175307 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-03 01:03:27.175310 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-03 01:03:27.175313 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-03 01:03:27.175316 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-03 01:03:27.175319 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-03 01:03:27.175322 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-03 01:03:27.175326 | orchestrator | 2026-03-03 01:03:27.175329 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-03 01:03:27.175332 | orchestrator | Tuesday 03 March 2026 01:02:00 +0000 (0:00:01.133) 0:08:37.668 ********* 2026-03-03 01:03:27.175335 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.175342 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-03 01:03:27.175345 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-03 01:03:27.175348 | orchestrator | 2026-03-03 01:03:27.175352 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-03 01:03:27.175355 | orchestrator | Tuesday 03 March 2026 01:02:02 +0000 (0:00:02.009) 0:08:39.677 ********* 2026-03-03 01:03:27.175358 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-03 01:03:27.175361 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-03 01:03:27.175364 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.175369 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-03 01:03:27.175372 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-03 01:03:27.175376 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.175379 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-03 01:03:27.175382 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-03 01:03:27.175385 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.175388 | orchestrator | 2026-03-03 01:03:27.175391 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-03 01:03:27.175394 | orchestrator | Tuesday 03 March 2026 01:02:04 +0000 (0:00:01.397) 0:08:41.075 ********* 2026-03-03 01:03:27.175398 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.175401 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.175404 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.175407 | orchestrator | 2026-03-03 01:03:27.175410 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-03 01:03:27.175413 | orchestrator | Tuesday 03 March 2026 01:02:06 +0000 (0:00:02.243) 0:08:43.318 ********* 2026-03-03 01:03:27.175416 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175420 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175423 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175426 | orchestrator | 2026-03-03 01:03:27.175429 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-03 01:03:27.175432 | orchestrator | Tuesday 03 March 2026 01:02:06 +0000 (0:00:00.316) 0:08:43.635 ********* 2026-03-03 01:03:27.175435 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-03 01:03:27.175438 | orchestrator | 2026-03-03 01:03:27.175442 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-03 01:03:27.175445 | orchestrator | Tuesday 03 March 2026 01:02:07 +0000 (0:00:00.682) 0:08:44.318 ********* 2026-03-03 01:03:27.175448 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.175451 | orchestrator | 2026-03-03 01:03:27.175454 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-03 01:03:27.175457 | orchestrator | Tuesday 03 March 2026 01:02:07 +0000 (0:00:00.485) 0:08:44.804 ********* 2026-03-03 01:03:27.175460 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.175463 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.175466 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.175470 | orchestrator | 2026-03-03 01:03:27.175473 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-03 01:03:27.175476 | orchestrator | Tuesday 03 March 2026 01:02:08 +0000 (0:00:00.998) 0:08:45.803 ********* 2026-03-03 01:03:27.175479 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.175482 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.175485 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.175488 | orchestrator | 2026-03-03 01:03:27.175492 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-03 01:03:27.175495 | orchestrator | Tuesday 03 March 2026 01:02:10 +0000 (0:00:01.199) 0:08:47.002 ********* 2026-03-03 01:03:27.175498 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.175501 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.175506 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.175510 | orchestrator | 2026-03-03 
01:03:27.175513 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-03 01:03:27.175516 | orchestrator | Tuesday 03 March 2026 01:02:11 +0000 (0:00:01.697) 0:08:48.700 ********* 2026-03-03 01:03:27.175519 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.175524 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.175527 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.175530 | orchestrator | 2026-03-03 01:03:27.175533 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-03 01:03:27.175536 | orchestrator | Tuesday 03 March 2026 01:02:13 +0000 (0:00:01.847) 0:08:50.547 ********* 2026-03-03 01:03:27.175540 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175543 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175546 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175549 | orchestrator | 2026-03-03 01:03:27.175552 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-03 01:03:27.175555 | orchestrator | Tuesday 03 March 2026 01:02:14 +0000 (0:00:01.142) 0:08:51.690 ********* 2026-03-03 01:03:27.175558 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.175562 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.175565 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.175568 | orchestrator | 2026-03-03 01:03:27.175571 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-03 01:03:27.175574 | orchestrator | Tuesday 03 March 2026 01:02:15 +0000 (0:00:00.624) 0:08:52.315 ********* 2026-03-03 01:03:27.175577 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-03-03 01:03:27.175580 | orchestrator | 2026-03-03 01:03:27.175584 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-03 01:03:27.175587 | orchestrator | Tuesday 03 March 2026 01:02:16 +0000 (0:00:00.688) 0:08:53.003 ********* 2026-03-03 01:03:27.175590 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175593 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175596 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175599 | orchestrator | 2026-03-03 01:03:27.175602 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-03 01:03:27.175606 | orchestrator | Tuesday 03 March 2026 01:02:16 +0000 (0:00:00.441) 0:08:53.445 ********* 2026-03-03 01:03:27.175609 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.175612 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.175615 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.175618 | orchestrator | 2026-03-03 01:03:27.175621 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-03 01:03:27.175625 | orchestrator | Tuesday 03 March 2026 01:02:17 +0000 (0:00:01.283) 0:08:54.729 ********* 2026-03-03 01:03:27.175628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:03:27.175633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:03:27.175636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:03:27.175639 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175643 | orchestrator | 2026-03-03 01:03:27.175648 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-03 01:03:27.175653 | orchestrator | Tuesday 03 March 2026 01:02:18 +0000 (0:00:01.113) 0:08:55.842 ********* 2026-03-03 01:03:27.175658 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175663 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175668 | orchestrator | ok: [testbed-node-5] 2026-03-03 
01:03:27.175673 | orchestrator | 2026-03-03 01:03:27.175678 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-03 01:03:27.175684 | orchestrator | 2026-03-03 01:03:27.175689 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-03 01:03:27.175695 | orchestrator | Tuesday 03 March 2026 01:02:19 +0000 (0:00:00.672) 0:08:56.515 ********* 2026-03-03 01:03:27.175704 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.175709 | orchestrator | 2026-03-03 01:03:27.175714 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-03 01:03:27.175720 | orchestrator | Tuesday 03 March 2026 01:02:19 +0000 (0:00:00.473) 0:08:56.988 ********* 2026-03-03 01:03:27.175725 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.175730 | orchestrator | 2026-03-03 01:03:27.175735 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-03 01:03:27.175741 | orchestrator | Tuesday 03 March 2026 01:02:20 +0000 (0:00:00.844) 0:08:57.833 ********* 2026-03-03 01:03:27.175746 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175751 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175757 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175761 | orchestrator | 2026-03-03 01:03:27.175767 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-03 01:03:27.175771 | orchestrator | Tuesday 03 March 2026 01:02:21 +0000 (0:00:00.342) 0:08:58.175 ********* 2026-03-03 01:03:27.175774 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175778 | orchestrator | ok: [testbed-node-5] 2026-03-03 
01:03:27.175783 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175788 | orchestrator | 2026-03-03 01:03:27.175793 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-03 01:03:27.175798 | orchestrator | Tuesday 03 March 2026 01:02:21 +0000 (0:00:00.811) 0:08:58.987 ********* 2026-03-03 01:03:27.175804 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175809 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175814 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175819 | orchestrator | 2026-03-03 01:03:27.175825 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-03 01:03:27.175830 | orchestrator | Tuesday 03 March 2026 01:02:22 +0000 (0:00:00.854) 0:08:59.842 ********* 2026-03-03 01:03:27.175835 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175841 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175846 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175851 | orchestrator | 2026-03-03 01:03:27.175855 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-03 01:03:27.175858 | orchestrator | Tuesday 03 March 2026 01:02:23 +0000 (0:00:00.982) 0:09:00.824 ********* 2026-03-03 01:03:27.175861 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175864 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175867 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175870 | orchestrator | 2026-03-03 01:03:27.175876 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-03 01:03:27.175879 | orchestrator | Tuesday 03 March 2026 01:02:24 +0000 (0:00:00.315) 0:09:01.139 ********* 2026-03-03 01:03:27.175883 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175886 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175889 | orchestrator | skipping: 
[testbed-node-5] 2026-03-03 01:03:27.175892 | orchestrator | 2026-03-03 01:03:27.175895 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-03 01:03:27.175898 | orchestrator | Tuesday 03 March 2026 01:02:24 +0000 (0:00:00.321) 0:09:01.461 ********* 2026-03-03 01:03:27.175901 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175904 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175908 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175911 | orchestrator | 2026-03-03 01:03:27.175914 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-03 01:03:27.175917 | orchestrator | Tuesday 03 March 2026 01:02:24 +0000 (0:00:00.343) 0:09:01.805 ********* 2026-03-03 01:03:27.175920 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175923 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175927 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175933 | orchestrator | 2026-03-03 01:03:27.175936 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-03 01:03:27.175939 | orchestrator | Tuesday 03 March 2026 01:02:25 +0000 (0:00:00.966) 0:09:02.771 ********* 2026-03-03 01:03:27.175942 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.175945 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.175949 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.175952 | orchestrator | 2026-03-03 01:03:27.175955 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-03 01:03:27.175958 | orchestrator | Tuesday 03 March 2026 01:02:26 +0000 (0:00:00.613) 0:09:03.384 ********* 2026-03-03 01:03:27.175961 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175965 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175968 | orchestrator | skipping: [testbed-node-5] 2026-03-03 
01:03:27.175971 | orchestrator | 2026-03-03 01:03:27.175974 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-03 01:03:27.175977 | orchestrator | Tuesday 03 March 2026 01:02:26 +0000 (0:00:00.281) 0:09:03.666 ********* 2026-03-03 01:03:27.175980 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.175983 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.175986 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.175989 | orchestrator | 2026-03-03 01:03:27.175993 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-03 01:03:27.175996 | orchestrator | Tuesday 03 March 2026 01:02:26 +0000 (0:00:00.272) 0:09:03.939 ********* 2026-03-03 01:03:27.175999 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.176002 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.176006 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.176009 | orchestrator | 2026-03-03 01:03:27.176012 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-03 01:03:27.176015 | orchestrator | Tuesday 03 March 2026 01:02:27 +0000 (0:00:00.462) 0:09:04.401 ********* 2026-03-03 01:03:27.176019 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.176022 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.176025 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.176028 | orchestrator | 2026-03-03 01:03:27.176031 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-03 01:03:27.176034 | orchestrator | Tuesday 03 March 2026 01:02:27 +0000 (0:00:00.286) 0:09:04.687 ********* 2026-03-03 01:03:27.176037 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.176041 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.176044 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.176047 | orchestrator | 2026-03-03 
01:03:27.176050 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-03 01:03:27.176053 | orchestrator | Tuesday 03 March 2026 01:02:28 +0000 (0:00:00.321) 0:09:05.009 ********* 2026-03-03 01:03:27.176056 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.176059 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.176063 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.176089 | orchestrator | 2026-03-03 01:03:27.176092 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-03 01:03:27.176096 | orchestrator | Tuesday 03 March 2026 01:02:28 +0000 (0:00:00.314) 0:09:05.324 ********* 2026-03-03 01:03:27.176099 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.176102 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.176105 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.176108 | orchestrator | 2026-03-03 01:03:27.176111 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-03 01:03:27.176114 | orchestrator | Tuesday 03 March 2026 01:02:28 +0000 (0:00:00.452) 0:09:05.776 ********* 2026-03-03 01:03:27.176117 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.176121 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.176124 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.176127 | orchestrator | 2026-03-03 01:03:27.176130 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-03 01:03:27.176136 | orchestrator | Tuesday 03 March 2026 01:02:29 +0000 (0:00:00.263) 0:09:06.040 ********* 2026-03-03 01:03:27.176139 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.176143 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.176146 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.176149 | orchestrator | 2026-03-03 01:03:27.176152 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-03 01:03:27.176155 | orchestrator | Tuesday 03 March 2026 01:02:29 +0000 (0:00:00.284) 0:09:06.324 ********* 2026-03-03 01:03:27.176158 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.176162 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.176165 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.176168 | orchestrator | 2026-03-03 01:03:27.176171 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-03 01:03:27.176174 | orchestrator | Tuesday 03 March 2026 01:02:29 +0000 (0:00:00.610) 0:09:06.934 ********* 2026-03-03 01:03:27.176178 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.176181 | orchestrator | 2026-03-03 01:03:27.176184 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-03 01:03:27.176189 | orchestrator | Tuesday 03 March 2026 01:02:30 +0000 (0:00:00.499) 0:09:07.434 ********* 2026-03-03 01:03:27.176192 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.176196 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-03 01:03:27.176199 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-03 01:03:27.176202 | orchestrator | 2026-03-03 01:03:27.176205 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-03 01:03:27.176208 | orchestrator | Tuesday 03 March 2026 01:02:32 +0000 (0:00:01.914) 0:09:09.348 ********* 2026-03-03 01:03:27.176211 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-03 01:03:27.176215 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-03 01:03:27.176218 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.176221 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-03 01:03:27.176224 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-03 01:03:27.176227 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.176230 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-03 01:03:27.176233 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-03 01:03:27.176254 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.176258 | orchestrator | 2026-03-03 01:03:27.176261 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-03 01:03:27.176264 | orchestrator | Tuesday 03 March 2026 01:02:33 +0000 (0:00:01.116) 0:09:10.465 ********* 2026-03-03 01:03:27.176267 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.176271 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.176274 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.176277 | orchestrator | 2026-03-03 01:03:27.176280 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-03 01:03:27.176283 | orchestrator | Tuesday 03 March 2026 01:02:34 +0000 (0:00:00.608) 0:09:11.074 ********* 2026-03-03 01:03:27.176286 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.176289 | orchestrator | 2026-03-03 01:03:27.176292 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-03 01:03:27.176295 | orchestrator | Tuesday 03 March 2026 01:02:34 +0000 (0:00:00.509) 0:09:11.583 ********* 2026-03-03 01:03:27.176300 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-03 01:03:27.176304 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-03 01:03:27.176309 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-03 01:03:27.176312 | orchestrator | 2026-03-03 01:03:27.176316 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-03 01:03:27.176319 | orchestrator | Tuesday 03 March 2026 01:02:35 +0000 (0:00:00.818) 0:09:12.401 ********* 2026-03-03 01:03:27.176322 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.176325 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-03 01:03:27.176328 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.176331 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-03 01:03:27.176335 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.176338 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-03 01:03:27.176341 | orchestrator | 2026-03-03 01:03:27.176344 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-03 01:03:27.176347 | orchestrator | Tuesday 03 March 2026 01:02:40 +0000 (0:00:05.247) 0:09:17.648 ********* 2026-03-03 01:03:27.176350 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.176353 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-03 01:03:27.176356 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.176359 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-03 01:03:27.176363 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:03:27.176366 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-03 01:03:27.176369 | orchestrator | 2026-03-03 01:03:27.176372 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-03 01:03:27.176375 | orchestrator | Tuesday 03 March 2026 01:02:42 +0000 (0:00:02.096) 0:09:19.745 ********* 2026-03-03 01:03:27.176378 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-03 01:03:27.176381 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.176384 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-03 01:03:27.176387 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.176390 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-03 01:03:27.176394 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.176397 | orchestrator | 2026-03-03 01:03:27.176400 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-03 01:03:27.176405 | orchestrator | Tuesday 03 March 2026 01:02:43 +0000 (0:00:01.040) 0:09:20.785 ********* 2026-03-03 01:03:27.176408 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-03 01:03:27.176411 | orchestrator | 2026-03-03 01:03:27.176414 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-03 01:03:27.176417 | orchestrator | Tuesday 03 March 2026 01:02:43 +0000 (0:00:00.203) 0:09:20.989 ********* 2026-03-03 01:03:27.176420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-03 01:03:27.176424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-03 01:03:27.176427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-03 01:03:27.176430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-03 01:03:27.176436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-03 01:03:27.176439 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.176442 | orchestrator | 2026-03-03 01:03:27.176445 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-03 01:03:27.176448 | orchestrator | Tuesday 03 March 2026 01:02:44 +0000 (0:00:00.704) 0:09:21.694 ********* 2026-03-03 01:03:27.176451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-03 01:03:27.176454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-03 01:03:27.176457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-03 01:03:27.176463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-03 01:03:27.176466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-03 01:03:27.176469 | orchestrator | skipping: [testbed-node-3] 2026-03-03 
01:03:27.176472 | orchestrator | 2026-03-03 01:03:27.176476 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-03 01:03:27.176479 | orchestrator | Tuesday 03 March 2026 01:02:45 +0000 (0:00:00.885) 0:09:22.579 ********* 2026-03-03 01:03:27.176483 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-03 01:03:27.176488 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-03 01:03:27.176494 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-03 01:03:27.176499 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-03 01:03:27.176504 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-03 01:03:27.176510 | orchestrator | 2026-03-03 01:03:27.176515 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-03 01:03:27.176520 | orchestrator | Tuesday 03 March 2026 01:03:12 +0000 (0:00:27.214) 0:09:49.794 ********* 2026-03-03 01:03:27.176526 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.176532 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.176538 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.176544 | orchestrator | 2026-03-03 01:03:27.176549 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-03 01:03:27.176555 | orchestrator | 
Tuesday 03 March 2026 01:03:13 +0000 (0:00:00.280) 0:09:50.075 ********* 2026-03-03 01:03:27.176561 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.176567 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.176571 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.176574 | orchestrator | 2026-03-03 01:03:27.176577 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-03 01:03:27.176580 | orchestrator | Tuesday 03 March 2026 01:03:13 +0000 (0:00:00.282) 0:09:50.357 ********* 2026-03-03 01:03:27.176583 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.176586 | orchestrator | 2026-03-03 01:03:27.176594 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-03 01:03:27.176597 | orchestrator | Tuesday 03 March 2026 01:03:13 +0000 (0:00:00.606) 0:09:50.964 ********* 2026-03-03 01:03:27.176600 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.176603 | orchestrator | 2026-03-03 01:03:27.176609 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-03 01:03:27.176612 | orchestrator | Tuesday 03 March 2026 01:03:14 +0000 (0:00:00.478) 0:09:51.442 ********* 2026-03-03 01:03:27.176615 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.176618 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.176621 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.176625 | orchestrator | 2026-03-03 01:03:27.176628 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-03 01:03:27.176631 | orchestrator | Tuesday 03 March 2026 01:03:15 +0000 (0:00:01.255) 0:09:52.697 ********* 2026-03-03 01:03:27.176634 | orchestrator | changed: 
[testbed-node-3] 2026-03-03 01:03:27.176637 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.176640 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.176643 | orchestrator | 2026-03-03 01:03:27.176646 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-03 01:03:27.176650 | orchestrator | Tuesday 03 March 2026 01:03:17 +0000 (0:00:01.341) 0:09:54.038 ********* 2026-03-03 01:03:27.176653 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:03:27.176656 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:03:27.176659 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:03:27.176662 | orchestrator | 2026-03-03 01:03:27.176665 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-03 01:03:27.176668 | orchestrator | Tuesday 03 March 2026 01:03:19 +0000 (0:00:02.131) 0:09:56.170 ********* 2026-03-03 01:03:27.176671 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-03 01:03:27.176675 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-03 01:03:27.176678 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-03 01:03:27.176681 | orchestrator | 2026-03-03 01:03:27.176684 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-03 01:03:27.176687 | orchestrator | Tuesday 03 March 2026 01:03:21 +0000 (0:00:02.488) 0:09:58.658 ********* 2026-03-03 01:03:27.176690 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.176694 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.176697 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.176700 | orchestrator 
| 2026-03-03 01:03:27.176705 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-03 01:03:27.176709 | orchestrator | Tuesday 03 March 2026 01:03:21 +0000 (0:00:00.304) 0:09:58.962 ********* 2026-03-03 01:03:27.176712 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:03:27.176715 | orchestrator | 2026-03-03 01:03:27.176718 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-03 01:03:27.176721 | orchestrator | Tuesday 03 March 2026 01:03:22 +0000 (0:00:00.444) 0:09:59.406 ********* 2026-03-03 01:03:27.176724 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.176728 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.176731 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.176734 | orchestrator | 2026-03-03 01:03:27.176737 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-03 01:03:27.176740 | orchestrator | Tuesday 03 March 2026 01:03:22 +0000 (0:00:00.440) 0:09:59.846 ********* 2026-03-03 01:03:27.176743 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:03:27.176746 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:03:27.176751 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:03:27.176755 | orchestrator | 2026-03-03 01:03:27.176758 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-03 01:03:27.176761 | orchestrator | Tuesday 03 March 2026 01:03:23 +0000 (0:00:00.297) 0:10:00.144 ********* 2026-03-03 01:03:27.176764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:03:27.176767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:03:27.176770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:03:27.176773 | orchestrator 
| skipping: [testbed-node-3] 2026-03-03 01:03:27.176776 | orchestrator | 2026-03-03 01:03:27.176780 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-03 01:03:27.176783 | orchestrator | Tuesday 03 March 2026 01:03:23 +0000 (0:00:00.541) 0:10:00.685 ********* 2026-03-03 01:03:27.176786 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:03:27.176789 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:03:27.176792 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:03:27.176795 | orchestrator | 2026-03-03 01:03:27.176799 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:03:27.176802 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-03 01:03:27.176806 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-03 01:03:27.176809 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-03 01:03:27.176812 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-03 01:03:27.176815 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-03 01:03:27.176820 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-03 01:03:27.176823 | orchestrator | 2026-03-03 01:03:27.176827 | orchestrator | 2026-03-03 01:03:27.176830 | orchestrator | 2026-03-03 01:03:27.176833 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:03:27.176836 | orchestrator | Tuesday 03 March 2026 01:03:23 +0000 (0:00:00.202) 0:10:00.888 ********* 2026-03-03 01:03:27.176839 | orchestrator | =============================================================================== 
2026-03-03 01:03:27.176842 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 37.01s 2026-03-03 01:03:27.176845 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 35.85s 2026-03-03 01:03:27.176849 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 35.75s 2026-03-03 01:03:27.176852 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 27.21s 2026-03-03 01:03:27.176855 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.44s 2026-03-03 01:03:27.176858 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.58s 2026-03-03 01:03:27.176861 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.36s 2026-03-03 01:03:27.176864 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.43s 2026-03-03 01:03:27.176867 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.87s 2026-03-03 01:03:27.176870 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.84s 2026-03-03 01:03:27.176873 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 6.42s 2026-03-03 01:03:27.176877 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.34s 2026-03-03 01:03:27.176882 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.25s 2026-03-03 01:03:27.176885 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.88s 2026-03-03 01:03:27.176888 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.70s 2026-03-03 01:03:27.176891 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.19s 2026-03-03 
01:03:27.176896 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.05s 2026-03-03 01:03:27.176899 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.83s 2026-03-03 01:03:27.176902 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.44s 2026-03-03 01:03:27.176906 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 2.99s 2026-03-03 01:03:27.176909 | orchestrator | 2026-03-03 01:03:27 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:27.176912 | orchestrator | 2026-03-03 01:03:27 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:27.176915 | orchestrator | 2026-03-03 01:03:27 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:27.176919 | orchestrator | 2026-03-03 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:30.201356 | orchestrator | 2026-03-03 01:03:30 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:30.204534 | orchestrator | 2026-03-03 01:03:30 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:30.206714 | orchestrator | 2026-03-03 01:03:30 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:30.207003 | orchestrator | 2026-03-03 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:33.239522 | orchestrator | 2026-03-03 01:03:33 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:33.243305 | orchestrator | 2026-03-03 01:03:33 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:33.245238 | orchestrator | 2026-03-03 01:03:33 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:33.245376 | orchestrator | 
2026-03-03 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:36.283425 | orchestrator | 2026-03-03 01:03:36 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:36.283511 | orchestrator | 2026-03-03 01:03:36 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:36.285797 | orchestrator | 2026-03-03 01:03:36 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:36.285879 | orchestrator | 2026-03-03 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:39.344916 | orchestrator | 2026-03-03 01:03:39 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:39.347359 | orchestrator | 2026-03-03 01:03:39 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:39.349770 | orchestrator | 2026-03-03 01:03:39 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:39.349845 | orchestrator | 2026-03-03 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:42.404066 | orchestrator | 2026-03-03 01:03:42 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:42.406386 | orchestrator | 2026-03-03 01:03:42 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:42.406821 | orchestrator | 2026-03-03 01:03:42 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:42.407257 | orchestrator | 2026-03-03 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:45.444435 | orchestrator | 2026-03-03 01:03:45 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:45.445967 | orchestrator | 2026-03-03 01:03:45 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:45.448276 | orchestrator | 2026-03-03 01:03:45 | INFO  | Task 
02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:45.448321 | orchestrator | 2026-03-03 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:48.492512 | orchestrator | 2026-03-03 01:03:48 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:48.495337 | orchestrator | 2026-03-03 01:03:48 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:48.496672 | orchestrator | 2026-03-03 01:03:48 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:48.497091 | orchestrator | 2026-03-03 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:51.536560 | orchestrator | 2026-03-03 01:03:51 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:51.536634 | orchestrator | 2026-03-03 01:03:51 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:51.537613 | orchestrator | 2026-03-03 01:03:51 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:51.537658 | orchestrator | 2026-03-03 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:54.586216 | orchestrator | 2026-03-03 01:03:54 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:54.586278 | orchestrator | 2026-03-03 01:03:54 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:03:54.587208 | orchestrator | 2026-03-03 01:03:54 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:54.587243 | orchestrator | 2026-03-03 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:03:57.631399 | orchestrator | 2026-03-03 01:03:57 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:03:57.633957 | orchestrator | 2026-03-03 01:03:57 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state 
STARTED 2026-03-03 01:03:57.635421 | orchestrator | 2026-03-03 01:03:57 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:03:57.635464 | orchestrator | 2026-03-03 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:00.679613 | orchestrator | 2026-03-03 01:04:00 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:00.681705 | orchestrator | 2026-03-03 01:04:00 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:00.683966 | orchestrator | 2026-03-03 01:04:00 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:04:00.684066 | orchestrator | 2026-03-03 01:04:00 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:03.729696 | orchestrator | 2026-03-03 01:04:03 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:03.732469 | orchestrator | 2026-03-03 01:04:03 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:03.734804 | orchestrator | 2026-03-03 01:04:03 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:04:03.734899 | orchestrator | 2026-03-03 01:04:03 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:06.775264 | orchestrator | 2026-03-03 01:04:06 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:06.776558 | orchestrator | 2026-03-03 01:04:06 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:06.778440 | orchestrator | 2026-03-03 01:04:06 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state STARTED 2026-03-03 01:04:06.778474 | orchestrator | 2026-03-03 01:04:06 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:09.819324 | orchestrator | 2026-03-03 01:04:09 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:09.822060 | orchestrator | 
2026-03-03 01:04:09 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:09.825683 | orchestrator | 2026-03-03 01:04:09 | INFO  | Task 02f838b7-0822-43c1-b379-670479d17e87 is in state SUCCESS 2026-03-03 01:04:09.826745 | orchestrator | 2026-03-03 01:04:09.826783 | orchestrator | 2026-03-03 01:04:09.826789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 01:04:09.826795 | orchestrator | 2026-03-03 01:04:09.826801 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:04:09.826806 | orchestrator | Tuesday 03 March 2026 01:01:51 +0000 (0:00:00.252) 0:00:00.252 ********* 2026-03-03 01:04:09.826811 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:04:09.826817 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:04:09.826822 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:04:09.826827 | orchestrator | 2026-03-03 01:04:09.826831 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:04:09.826836 | orchestrator | Tuesday 03 March 2026 01:01:52 +0000 (0:00:00.285) 0:00:00.538 ********* 2026-03-03 01:04:09.826842 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-03 01:04:09.826847 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-03 01:04:09.826852 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-03 01:04:09.826857 | orchestrator | 2026-03-03 01:04:09.826861 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-03 01:04:09.826866 | orchestrator | 2026-03-03 01:04:09.826871 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-03 01:04:09.826876 | orchestrator | Tuesday 03 March 2026 01:01:52 +0000 (0:00:00.450) 0:00:00.988 ********* 2026-03-03 01:04:09.826880 | orchestrator 
| included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:04:09.826885 | orchestrator | 2026-03-03 01:04:09.826890 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-03 01:04:09.826907 | orchestrator | Tuesday 03 March 2026 01:01:53 +0000 (0:00:00.496) 0:00:01.484 ********* 2026-03-03 01:04:09.826911 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-03 01:04:09.826915 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-03 01:04:09.826919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-03 01:04:09.826923 | orchestrator | 2026-03-03 01:04:09.826926 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-03 01:04:09.826930 | orchestrator | Tuesday 03 March 2026 01:01:54 +0000 (0:00:01.600) 0:00:03.085 ********* 2026-03-03 01:04:09.826936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827089 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827134 | orchestrator | 2026-03-03 01:04:09.827138 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-03 01:04:09.827142 | orchestrator | Tuesday 03 March 2026 01:01:56 +0000 (0:00:01.864) 0:00:04.950 ********* 2026-03-03 01:04:09.827146 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:04:09.827150 | orchestrator | 2026-03-03 01:04:09.827154 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-03 01:04:09.827158 | orchestrator | Tuesday 03 March 2026 01:01:57 +0000 (0:00:00.518) 0:00:05.469 ********* 2026-03-03 01:04:09.827168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827205 | orchestrator | 2026-03-03 01:04:09.827212 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-03 01:04:09.827216 | orchestrator | Tuesday 03 March 2026 01:01:59 +0000 (0:00:02.745) 0:00:08.214 ********* 2026-03-03 01:04:09.827220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-03 01:04:09.827238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-03 01:04:09.827243 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:04:09.827247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-03 01:04:09.827255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-03 01:04:09.827259 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:04:09.827266 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-03 01:04:09.827273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-03 01:04:09.827278 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:04:09.827282 | orchestrator | 
2026-03-03 01:04:09.827285 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-03 01:04:09.827289 | orchestrator | Tuesday 03 March 2026 01:02:00 +0000 (0:00:01.007) 0:00:09.221 ********* 2026-03-03 01:04:09.827294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-03 01:04:09.827301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-03 01:04:09.827306 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:04:09.827315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-03 01:04:09.827320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-03 01:04:09.827324 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:04:09.827328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-03 01:04:09.827336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-03 01:04:09.827341 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:04:09.827345 | orchestrator | 2026-03-03 01:04:09.827351 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-03 01:04:09.827355 | orchestrator | Tuesday 03 March 2026 01:02:01 +0000 (0:00:00.724) 0:00:09.946 ********* 2026-03-03 01:04:09.827362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827399 | orchestrator | 2026-03-03 01:04:09.827403 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-03 01:04:09.827407 | orchestrator | Tuesday 03 March 2026 01:02:03 +0000 (0:00:02.262) 0:00:12.208 ********* 2026-03-03 01:04:09.827411 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:09.827415 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:04:09.827419 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:04:09.827423 | orchestrator | 2026-03-03 01:04:09.827427 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-03 01:04:09.827430 | orchestrator | Tuesday 03 March 2026 01:02:06 +0000 (0:00:02.269) 0:00:14.477 ********* 2026-03-03 01:04:09.827434 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:09.827438 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:04:09.827442 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:04:09.827446 | orchestrator | 2026-03-03 01:04:09.827450 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-03 01:04:09.827453 | orchestrator | Tuesday 03 March 2026 01:02:08 +0000 (0:00:02.091) 0:00:16.569 ********* 2026-03-03 01:04:09.827457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:09.827464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-03 01:04:09.827485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-03 01:04:09.827506 | orchestrator | 2026-03-03 01:04:09.827509 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2026-03-03 01:04:09.827513 | orchestrator | Tuesday 03 March 2026 01:02:10 +0000 (0:00:01.797) 0:00:18.367 ********* 2026-03-03 01:04:09.827517 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:04:09.827521 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:04:09.827525 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:04:09.827529 | orchestrator | 2026-03-03 01:04:09.827532 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-03 01:04:09.827539 | orchestrator | Tuesday 03 March 2026 01:02:10 +0000 (0:00:00.262) 0:00:18.629 ********* 2026-03-03 01:04:09.827543 | orchestrator | 2026-03-03 01:04:09.827546 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-03 01:04:09.827550 | orchestrator | Tuesday 03 March 2026 01:02:10 +0000 (0:00:00.059) 0:00:18.689 ********* 2026-03-03 01:04:09.827554 | orchestrator | 2026-03-03 01:04:09.827558 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-03 01:04:09.827562 | orchestrator | Tuesday 03 March 2026 01:02:10 +0000 (0:00:00.060) 0:00:18.749 ********* 2026-03-03 01:04:09.827566 | orchestrator | 2026-03-03 01:04:09.827569 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-03 01:04:09.827573 | orchestrator | Tuesday 03 March 2026 01:02:10 +0000 (0:00:00.063) 0:00:18.812 ********* 2026-03-03 01:04:09.827577 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:04:09.827581 | orchestrator | 2026-03-03 01:04:09.827585 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-03 01:04:09.827589 | orchestrator | Tuesday 03 March 2026 01:02:10 +0000 (0:00:00.478) 0:00:19.291 ********* 2026-03-03 01:04:09.827592 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:04:09.827596 | 
orchestrator | 2026-03-03 01:04:09.827600 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-03 01:04:09.827604 | orchestrator | Tuesday 03 March 2026 01:02:11 +0000 (0:00:00.185) 0:00:19.476 ********* 2026-03-03 01:04:09.827608 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:09.827612 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:04:09.827615 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:04:09.827619 | orchestrator | 2026-03-03 01:04:09.827623 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-03 01:04:09.827627 | orchestrator | Tuesday 03 March 2026 01:02:55 +0000 (0:00:44.544) 0:01:04.021 ********* 2026-03-03 01:04:09.827631 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:09.827635 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:04:09.827638 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:04:09.827642 | orchestrator | 2026-03-03 01:04:09.827646 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-03 01:04:09.827650 | orchestrator | Tuesday 03 March 2026 01:03:57 +0000 (0:01:01.581) 0:02:05.602 ********* 2026-03-03 01:04:09.827654 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:04:09.827658 | orchestrator | 2026-03-03 01:04:09.827661 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-03 01:04:09.827671 | orchestrator | Tuesday 03 March 2026 01:03:57 +0000 (0:00:00.591) 0:02:06.194 ********* 2026-03-03 01:04:09.827675 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:04:09.827679 | orchestrator | 2026-03-03 01:04:09.827683 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-03 01:04:09.827687 | orchestrator | Tuesday 03 March 2026 01:04:00 
+0000 (0:00:02.281) 0:02:08.476 ********* 2026-03-03 01:04:09.827690 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:04:09.827694 | orchestrator | 2026-03-03 01:04:09.827698 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-03 01:04:09.827702 | orchestrator | Tuesday 03 March 2026 01:04:02 +0000 (0:00:01.878) 0:02:10.354 ********* 2026-03-03 01:04:09.827706 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:04:09.827710 | orchestrator | 2026-03-03 01:04:09.827713 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-03 01:04:09.827717 | orchestrator | Tuesday 03 March 2026 01:04:04 +0000 (0:00:02.032) 0:02:12.387 ********* 2026-03-03 01:04:09.827721 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:09.827726 | orchestrator | 2026-03-03 01:04:09.827731 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-03 01:04:09.827735 | orchestrator | Tuesday 03 March 2026 01:04:06 +0000 (0:00:02.295) 0:02:14.683 ********* 2026-03-03 01:04:09.827740 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:09.827744 | orchestrator | 2026-03-03 01:04:09.827749 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:04:09.827753 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-03 01:04:09.827762 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-03 01:04:09.827766 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-03 01:04:09.827771 | orchestrator | 2026-03-03 01:04:09.827775 | orchestrator | 2026-03-03 01:04:09.827780 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:04:09.827784 | orchestrator | 
Tuesday 03 March 2026 01:04:08 +0000 (0:00:02.586) 0:02:17.269 ********* 2026-03-03 01:04:09.827789 | orchestrator | =============================================================================== 2026-03-03 01:04:09.827793 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 61.58s 2026-03-03 01:04:09.827798 | orchestrator | opensearch : Restart opensearch container ------------------------------ 44.54s 2026-03-03 01:04:09.827802 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.75s 2026-03-03 01:04:09.827807 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.59s 2026-03-03 01:04:09.827811 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.30s 2026-03-03 01:04:09.827816 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.28s 2026-03-03 01:04:09.827821 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.27s 2026-03-03 01:04:09.827825 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.26s 2026-03-03 01:04:09.827830 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.09s 2026-03-03 01:04:09.827837 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.03s 2026-03-03 01:04:09.827841 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 1.88s 2026-03-03 01:04:09.827845 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.86s 2026-03-03 01:04:09.827850 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.80s 2026-03-03 01:04:09.827855 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.60s 2026-03-03 01:04:09.827863 | orchestrator | service-cert-copy : 
opensearch | Copying over backend internal TLS certificate --- 1.01s 2026-03-03 01:04:09.827867 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.72s 2026-03-03 01:04:09.827872 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2026-03-03 01:04:09.827876 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-03-03 01:04:09.827881 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-03-03 01:04:09.827885 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.48s 2026-03-03 01:04:12.870763 | orchestrator | 2026-03-03 01:04:12 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:12.872686 | orchestrator | 2026-03-03 01:04:12 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:12.872758 | orchestrator | 2026-03-03 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:15.919408 | orchestrator | 2026-03-03 01:04:15 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:15.924786 | orchestrator | 2026-03-03 01:04:15 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:15.924874 | orchestrator | 2026-03-03 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:18.961877 | orchestrator | 2026-03-03 01:04:18 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:18.963694 | orchestrator | 2026-03-03 01:04:18 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:18.963735 | orchestrator | 2026-03-03 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:22.003955 | orchestrator | 2026-03-03 01:04:22 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 
01:04:22.006393 | orchestrator | 2026-03-03 01:04:22 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:22.006442 | orchestrator | 2026-03-03 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:25.046560 | orchestrator | 2026-03-03 01:04:25 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:25.049141 | orchestrator | 2026-03-03 01:04:25 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:25.049228 | orchestrator | 2026-03-03 01:04:25 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:28.092838 | orchestrator | 2026-03-03 01:04:28 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:28.094915 | orchestrator | 2026-03-03 01:04:28 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:28.094965 | orchestrator | 2026-03-03 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:31.143945 | orchestrator | 2026-03-03 01:04:31 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:31.145110 | orchestrator | 2026-03-03 01:04:31 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:31.145146 | orchestrator | 2026-03-03 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:34.185841 | orchestrator | 2026-03-03 01:04:34 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:34.187635 | orchestrator | 2026-03-03 01:04:34 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:34.187683 | orchestrator | 2026-03-03 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:37.230483 | orchestrator | 2026-03-03 01:04:37 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:37.232065 | orchestrator | 2026-03-03 01:04:37 | INFO  | Task 
80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:37.232160 | orchestrator | 2026-03-03 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:40.280133 | orchestrator | 2026-03-03 01:04:40 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:40.291245 | orchestrator | 2026-03-03 01:04:40 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state STARTED 2026-03-03 01:04:40.291319 | orchestrator | 2026-03-03 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:43.343641 | orchestrator | 2026-03-03 01:04:43 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:43.347197 | orchestrator | 2026-03-03 01:04:43 | INFO  | Task 80728bd4-ad03-4eed-bbfe-116d7209ea7c is in state SUCCESS 2026-03-03 01:04:43.348102 | orchestrator | 2026-03-03 01:04:43.348184 | orchestrator | 2026-03-03 01:04:43.348192 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-03 01:04:43.348199 | orchestrator | 2026-03-03 01:04:43.348206 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-03 01:04:43.348213 | orchestrator | Tuesday 03 March 2026 01:01:51 +0000 (0:00:00.088) 0:00:00.088 ********* 2026-03-03 01:04:43.348219 | orchestrator | ok: [localhost] => { 2026-03-03 01:04:43.348227 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-03 01:04:43.348234 | orchestrator | } 2026-03-03 01:04:43.348241 | orchestrator | 2026-03-03 01:04:43.348247 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-03 01:04:43.348254 | orchestrator | Tuesday 03 March 2026 01:01:51 +0000 (0:00:00.056) 0:00:00.145 ********* 2026-03-03 01:04:43.348261 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-03 01:04:43.348270 | orchestrator | ...ignoring 2026-03-03 01:04:43.348276 | orchestrator | 2026-03-03 01:04:43.348283 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-03 01:04:43.348289 | orchestrator | Tuesday 03 March 2026 01:01:54 +0000 (0:00:02.879) 0:00:03.025 ********* 2026-03-03 01:04:43.348295 | orchestrator | skipping: [localhost] 2026-03-03 01:04:43.348302 | orchestrator | 2026-03-03 01:04:43.348308 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-03 01:04:43.348314 | orchestrator | Tuesday 03 March 2026 01:01:54 +0000 (0:00:00.067) 0:00:03.092 ********* 2026-03-03 01:04:43.348320 | orchestrator | ok: [localhost] 2026-03-03 01:04:43.348326 | orchestrator | 2026-03-03 01:04:43.348333 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 01:04:43.348338 | orchestrator | 2026-03-03 01:04:43.348345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:04:43.348351 | orchestrator | Tuesday 03 March 2026 01:01:55 +0000 (0:00:00.184) 0:00:03.277 ********* 2026-03-03 01:04:43.348357 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:04:43.348364 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:04:43.348370 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:04:43.348375 | orchestrator | 2026-03-03 01:04:43.348381 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:04:43.348387 | orchestrator | Tuesday 03 March 2026 01:01:55 +0000 (0:00:00.570) 0:00:03.847 ********* 2026-03-03 01:04:43.348393 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-03 01:04:43.348400 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-03 01:04:43.348407 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-03 01:04:43.348429 | orchestrator | 2026-03-03 01:04:43.348435 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-03 01:04:43.348466 | orchestrator | 2026-03-03 01:04:43.348473 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-03 01:04:43.348479 | orchestrator | Tuesday 03 March 2026 01:01:56 +0000 (0:00:00.536) 0:00:04.384 ********* 2026-03-03 01:04:43.348484 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-03 01:04:43.348491 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-03 01:04:43.348496 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-03 01:04:43.348502 | orchestrator | 2026-03-03 01:04:43.348508 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-03 01:04:43.348513 | orchestrator | Tuesday 03 March 2026 01:01:56 +0000 (0:00:00.371) 0:00:04.756 ********* 2026-03-03 01:04:43.348520 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:04:43.348527 | orchestrator | 2026-03-03 01:04:43.348534 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-03 01:04:43.348540 | orchestrator | Tuesday 03 March 2026 01:01:57 +0000 (0:00:00.523) 0:00:05.279 ********* 2026-03-03 01:04:43.348583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-03 01:04:43.348606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-03 01:04:43.348631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-03 01:04:43.348638 | orchestrator | 2026-03-03 01:04:43.348649 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-03 01:04:43.348656 | orchestrator | Tuesday 03 March 2026 01:01:59 +0000 (0:00:02.731) 0:00:08.010 ********* 2026-03-03 01:04:43.348662 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:04:43.348669 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:04:43.348674 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:43.348680 | orchestrator | 2026-03-03 01:04:43.348686 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-03 01:04:43.348692 | orchestrator | Tuesday 03 March 2026 01:02:00 +0000 (0:00:00.671) 0:00:08.682 ********* 2026-03-03 01:04:43.348698 | orchestrator | skipping: [testbed-node-1] 2026-03-03 
01:04:43.348704 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:04:43.348720 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:43.348727 | orchestrator | 2026-03-03 01:04:43.348734 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-03 01:04:43.348740 | orchestrator | Tuesday 03 March 2026 01:02:01 +0000 (0:00:01.410) 0:00:10.093 ********* 2026-03-03 01:04:43.348746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-03 01:04:43.348769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-03 01:04:43.348778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-03 
01:04:43.348790 | orchestrator | 2026-03-03 01:04:43.348796 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-03 01:04:43.348802 | orchestrator | Tuesday 03 March 2026 01:02:04 +0000 (0:00:03.001) 0:00:13.094 ********* 2026-03-03 01:04:43.348809 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:04:43.348815 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:04:43.348821 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:43.348827 | orchestrator | 2026-03-03 01:04:43.348833 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-03 01:04:43.348839 | orchestrator | Tuesday 03 March 2026 01:02:05 +0000 (0:00:01.097) 0:00:14.192 ********* 2026-03-03 01:04:43.348845 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:43.348852 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:04:43.348858 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:04:43.348864 | orchestrator | 2026-03-03 01:04:43.348870 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-03 01:04:43.348876 | orchestrator | Tuesday 03 March 2026 01:02:10 +0000 (0:00:04.262) 0:00:18.454 ********* 2026-03-03 01:04:43.348884 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:04:43.348890 | orchestrator | 2026-03-03 01:04:43.348896 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-03 01:04:43.348901 | orchestrator | Tuesday 03 March 2026 01:02:10 +0000 (0:00:00.487) 0:00:18.941 ********* 2026-03-03 01:04:43.348918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:04:43.348975 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:04:43.348985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:04:43.348993 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:04:43.349016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:04:43.349029 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:04:43.349035 | orchestrator | 2026-03-03 01:04:43.349042 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-03 01:04:43.349048 | orchestrator | Tuesday 03 March 2026 01:02:13 +0000 (0:00:02.890) 0:00:21.831 ********* 2026-03-03 01:04:43.349055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:04:43.349062 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:04:43.349077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:04:43.349089 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:04:43.349096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:04:43.349103 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:04:43.349355 | orchestrator | 2026-03-03 01:04:43.349372 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-03 01:04:43.349379 | orchestrator | Tuesday 03 March 2026 01:02:16 +0000 (0:00:03.000) 0:00:24.832 ********* 2026-03-03 01:04:43.349395 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:04:43.349412 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:04:43.349424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:04:43.349429 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:04:43.349433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-03 01:04:43.349437 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:04:43.349441 | orchestrator | 2026-03-03 01:04:43.349449 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-03 01:04:43.349457 | orchestrator | Tuesday 03 March 2026 01:02:19 +0000 
(0:00:02.766) 0:00:27.599 ********* 2026-03-03 01:04:43.349712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-03 01:04:43.349731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-03 01:04:43.349751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-03 01:04:43.349765 | orchestrator | 2026-03-03 01:04:43.349771 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-03 01:04:43.349777 | orchestrator | Tuesday 03 March 2026 01:02:23 +0000 (0:00:03.873) 0:00:31.473 ********* 2026-03-03 01:04:43.349783 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:04:43.349790 | orchestrator | 
changed: [testbed-node-1]
2026-03-03 01:04:43.349799 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:04:43.349806 | orchestrator |
2026-03-03 01:04:43.349812 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-03 01:04:43.349818 | orchestrator | Tuesday 03 March 2026 01:02:24 +0000 (0:00:01.191) 0:00:32.664 *********
2026-03-03 01:04:43.349825 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.349833 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:04:43.349842 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:04:43.349849 | orchestrator |
2026-03-03 01:04:43.349855 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-03 01:04:43.349861 | orchestrator | Tuesday 03 March 2026 01:02:24 +0000 (0:00:00.360) 0:00:33.025 *********
2026-03-03 01:04:43.349867 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.349873 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:04:43.349880 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:04:43.349886 | orchestrator |
2026-03-03 01:04:43.349893 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-03 01:04:43.349899 | orchestrator | Tuesday 03 March 2026 01:02:25 +0000 (0:00:00.508) 0:00:33.533 *********
2026-03-03 01:04:43.349904 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-03 01:04:43.349909 | orchestrator | ...ignoring
2026-03-03 01:04:43.349913 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-03 01:04:43.349917 | orchestrator | ...ignoring
2026-03-03 01:04:43.349923 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-03 01:04:43.349929 | orchestrator | ...ignoring
2026-03-03 01:04:43.349937 | orchestrator |
2026-03-03 01:04:43.349945 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-03 01:04:43.349951 | orchestrator | Tuesday 03 March 2026 01:02:36 +0000 (0:00:11.091) 0:00:44.625 *********
2026-03-03 01:04:43.349963 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.349969 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:04:43.349975 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:04:43.349980 | orchestrator |
2026-03-03 01:04:43.349987 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-03 01:04:43.349993 | orchestrator | Tuesday 03 March 2026 01:02:36 +0000 (0:00:00.439) 0:00:45.064 *********
2026-03-03 01:04:43.350001 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:04:43.350009 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.350078 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.350087 | orchestrator |
2026-03-03 01:04:43.350094 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-03 01:04:43.350102 | orchestrator | Tuesday 03 March 2026 01:02:37 +0000 (0:00:00.623) 0:00:45.687 *********
2026-03-03 01:04:43.350242 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:04:43.350252 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.350259 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.350266 | orchestrator |
2026-03-03 01:04:43.350273 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-03 01:04:43.350280 | orchestrator | Tuesday 03 March 2026 01:02:37 +0000 (0:00:00.406) 0:00:46.094 *********
2026-03-03 01:04:43.350286 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:04:43.350293 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.350299 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.350308 | orchestrator |
2026-03-03 01:04:43.350323 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-03 01:04:43.350330 | orchestrator | Tuesday 03 March 2026 01:02:38 +0000 (0:00:00.440) 0:00:46.535 *********
2026-03-03 01:04:43.350337 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.350344 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:04:43.350353 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:04:43.350362 | orchestrator |
2026-03-03 01:04:43.350369 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-03 01:04:43.350376 | orchestrator | Tuesday 03 March 2026 01:02:38 +0000 (0:00:00.422) 0:00:46.957 *********
2026-03-03 01:04:43.350392 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:04:43.350399 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.350405 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.350412 | orchestrator |
2026-03-03 01:04:43.350419 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-03 01:04:43.350425 | orchestrator | Tuesday 03 March 2026 01:02:39 +0000 (0:00:00.641) 0:00:47.599 *********
2026-03-03 01:04:43.350430 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.350437 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.350445 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-03 01:04:43.350452 | orchestrator |
2026-03-03 01:04:43.350459 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-03 01:04:43.350465 | orchestrator | Tuesday 03 March 2026 01:02:39 +0000 (0:00:00.381) 0:00:47.980 *********
2026-03-03 01:04:43.350472 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:04:43.350479 | orchestrator |
2026-03-03 01:04:43.350488 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-03 01:04:43.350496 | orchestrator | Tuesday 03 March 2026 01:02:48 +0000 (0:00:09.112) 0:00:57.093 *********
2026-03-03 01:04:43.350502 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.350509 | orchestrator |
2026-03-03 01:04:43.350515 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-03 01:04:43.350522 | orchestrator | Tuesday 03 March 2026 01:02:49 +0000 (0:00:00.118) 0:00:57.211 *********
2026-03-03 01:04:43.350528 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:04:43.350535 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.350542 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.350551 | orchestrator |
2026-03-03 01:04:43.350559 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-03 01:04:43.350574 | orchestrator | Tuesday 03 March 2026 01:02:49 +0000 (0:00:00.863) 0:00:58.075 *********
2026-03-03 01:04:43.350581 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:04:43.350587 | orchestrator |
2026-03-03 01:04:43.350594 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-03 01:04:43.350601 | orchestrator | Tuesday 03 March 2026 01:02:56 +0000 (0:00:07.047) 0:01:05.122 *********
2026-03-03 01:04:43.350607 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.350613 | orchestrator |
2026-03-03 01:04:43.350620 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-03 01:04:43.350627 | orchestrator | Tuesday 03 March 2026 01:02:58 +0000 (0:00:01.592) 0:01:06.714 *********
2026-03-03 01:04:43.350633 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.350640 | orchestrator |
2026-03-03 01:04:43.350646 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-03 01:04:43.350653 | orchestrator | Tuesday 03 March 2026 01:03:00 +0000 (0:00:02.389) 0:01:09.103 *********
2026-03-03 01:04:43.350659 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:04:43.350667 | orchestrator |
2026-03-03 01:04:43.350672 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-03 01:04:43.350676 | orchestrator | Tuesday 03 March 2026 01:03:01 +0000 (0:00:00.139) 0:01:09.243 *********
2026-03-03 01:04:43.350681 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:04:43.350685 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.350690 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.350694 | orchestrator |
2026-03-03 01:04:43.350699 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-03 01:04:43.350704 | orchestrator | Tuesday 03 March 2026 01:03:01 +0000 (0:00:00.369) 0:01:09.612 *********
2026-03-03 01:04:43.350708 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:04:43.350712 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:04:43.350716 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:04:43.350719 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-03 01:04:43.350723 | orchestrator |
2026-03-03 01:04:43.350727 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-03 01:04:43.350731 | orchestrator | skipping: no hosts matched
2026-03-03 01:04:43.350735 | orchestrator |
2026-03-03 01:04:43.350739 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-03 01:04:43.350743 | orchestrator |
2026-03-03 01:04:43.350746 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-03 01:04:43.350750 | orchestrator | Tuesday 03 March 2026 01:03:01 +0000 (0:00:00.465) 0:01:10.078 *********
2026-03-03 01:04:43.350754 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:04:43.350758 | orchestrator |
2026-03-03 01:04:43.350761 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-03 01:04:43.350765 | orchestrator | Tuesday 03 March 2026 01:03:22 +0000 (0:00:20.569) 0:01:30.648 *********
2026-03-03 01:04:43.350769 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:04:43.350773 | orchestrator |
2026-03-03 01:04:43.350777 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-03 01:04:43.350780 | orchestrator | Tuesday 03 March 2026 01:03:32 +0000 (0:00:10.511) 0:01:41.159 *********
2026-03-03 01:04:43.350784 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:04:43.350788 | orchestrator |
2026-03-03 01:04:43.350792 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-03 01:04:43.350795 | orchestrator |
2026-03-03 01:04:43.350875 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-03 01:04:43.350879 | orchestrator | Tuesday 03 March 2026 01:03:35 +0000 (0:00:02.459) 0:01:43.619 *********
2026-03-03 01:04:43.350883 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:04:43.350887 | orchestrator |
2026-03-03 01:04:43.350891 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-03 01:04:43.350900 | orchestrator | Tuesday 03 March 2026 01:03:52 +0000 (0:00:16.869) 0:02:00.488 *********
2026-03-03 01:04:43.350909 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:04:43.350913 | orchestrator |
2026-03-03 01:04:43.350916 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-03 01:04:43.350920 | orchestrator | Tuesday 03 March 2026 01:04:07 +0000 (0:00:15.556) 0:02:16.045 *********
2026-03-03 01:04:43.350924 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:04:43.350928 | orchestrator |
2026-03-03 01:04:43.350932 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-03 01:04:43.350936 | orchestrator |
2026-03-03 01:04:43.350945 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-03 01:04:43.350949 | orchestrator | Tuesday 03 March 2026 01:04:10 +0000 (0:00:02.353) 0:02:18.398 *********
2026-03-03 01:04:43.350953 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:04:43.350957 | orchestrator |
2026-03-03 01:04:43.350961 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-03 01:04:43.350964 | orchestrator | Tuesday 03 March 2026 01:04:25 +0000 (0:00:15.742) 0:02:34.140 *********
2026-03-03 01:04:43.350968 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.350972 | orchestrator |
2026-03-03 01:04:43.350976 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-03 01:04:43.350980 | orchestrator | Tuesday 03 March 2026 01:04:26 +0000 (0:00:00.622) 0:02:34.762 *********
2026-03-03 01:04:43.350984 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.350987 | orchestrator |
2026-03-03 01:04:43.350991 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-03 01:04:43.350995 | orchestrator |
2026-03-03 01:04:43.350999 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-03 01:04:43.351003 | orchestrator | Tuesday 03 March 2026 01:04:29 +0000 (0:00:02.462) 0:02:37.225 *********
2026-03-03 01:04:43.351007 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:04:43.351011 | orchestrator |
2026-03-03 01:04:43.351015 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-03 01:04:43.351018 | orchestrator | Tuesday 03 March 2026 01:04:29 +0000 (0:00:00.544) 0:02:37.770 *********
2026-03-03 01:04:43.351022 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.351026 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.351030 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:04:43.351034 | orchestrator |
2026-03-03 01:04:43.351038 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-03 01:04:43.351042 | orchestrator | Tuesday 03 March 2026 01:04:31 +0000 (0:00:02.163) 0:02:39.933 *********
2026-03-03 01:04:43.351045 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.351049 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.351053 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:04:43.351057 | orchestrator |
2026-03-03 01:04:43.351061 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-03 01:04:43.351064 | orchestrator | Tuesday 03 March 2026 01:04:33 +0000 (0:00:01.926) 0:02:41.860 *********
2026-03-03 01:04:43.351068 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.351072 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.351076 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:04:43.351080 | orchestrator |
2026-03-03 01:04:43.351084 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-03 01:04:43.351087 | orchestrator | Tuesday 03 March 2026 01:04:35 +0000 (0:00:01.841) 0:02:43.701 *********
2026-03-03 01:04:43.351091 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.351095 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.351099 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:04:43.351103 | orchestrator |
2026-03-03 01:04:43.351123 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-03 01:04:43.351129 | orchestrator | Tuesday 03 March 2026 01:04:37 +0000 (0:00:01.834) 0:02:45.536 *********
2026-03-03 01:04:43.351146 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:04:43.351152 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:04:43.351157 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:04:43.351163 | orchestrator |
2026-03-03 01:04:43.351169 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-03 01:04:43.351174 | orchestrator | Tuesday 03 March 2026 01:04:40 +0000 (0:00:03.034) 0:02:48.570 *********
2026-03-03 01:04:43.351179 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:04:43.351185 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:04:43.351191 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:04:43.351196 | orchestrator |
2026-03-03 01:04:43.351202 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:04:43.351273 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-03 01:04:43.351286 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-03-03 01:04:43.351294 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-03 01:04:43.351299 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-03 01:04:43.351305 | orchestrator |
2026-03-03 01:04:43.351311 | orchestrator |
2026-03-03 01:04:43.351317 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:04:43.351323 | orchestrator | Tuesday 03 March 2026 01:04:40 +0000 (0:00:00.225) 0:02:48.796 *********
2026-03-03 01:04:43.351329 |
orchestrator | =============================================================================== 2026-03-03 01:04:43.351336 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.44s 2026-03-03 01:04:43.351343 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.07s 2026-03-03 01:04:43.351354 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.74s 2026-03-03 01:04:43.351360 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.09s 2026-03-03 01:04:43.351366 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.11s 2026-03-03 01:04:43.351372 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.05s 2026-03-03 01:04:43.351383 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.81s 2026-03-03 01:04:43.351390 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.26s 2026-03-03 01:04:43.351396 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.87s 2026-03-03 01:04:43.351403 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.03s 2026-03-03 01:04:43.351408 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.00s 2026-03-03 01:04:43.351415 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.00s 2026-03-03 01:04:43.351422 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.89s 2026-03-03 01:04:43.351428 | orchestrator | Check MariaDB service --------------------------------------------------- 2.88s 2026-03-03 01:04:43.351435 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.77s 2026-03-03 01:04:43.351441 | 
orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.73s 2026-03-03 01:04:43.351447 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.46s 2026-03-03 01:04:43.351454 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.39s 2026-03-03 01:04:43.351461 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.16s 2026-03-03 01:04:43.351467 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 1.93s 2026-03-03 01:04:43.351480 | orchestrator | 2026-03-03 01:04:43 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:04:43.351487 | orchestrator | 2026-03-03 01:04:43 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:04:43.351493 | orchestrator | 2026-03-03 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:46.395696 | orchestrator | 2026-03-03 01:04:46 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:46.395766 | orchestrator | 2026-03-03 01:04:46 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:04:46.396844 | orchestrator | 2026-03-03 01:04:46 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:04:46.396887 | orchestrator | 2026-03-03 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:49.430995 | orchestrator | 2026-03-03 01:04:49 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:49.433626 | orchestrator | 2026-03-03 01:04:49 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:04:49.434353 | orchestrator | 2026-03-03 01:04:49 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:04:49.434387 | orchestrator | 2026-03-03 01:04:49 | INFO  | Wait 1 second(s) until 
the next check 2026-03-03 01:04:52.474183 | orchestrator | 2026-03-03 01:04:52 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:52.477572 | orchestrator | 2026-03-03 01:04:52 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:04:52.479197 | orchestrator | 2026-03-03 01:04:52 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:04:52.479248 | orchestrator | 2026-03-03 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:55.516242 | orchestrator | 2026-03-03 01:04:55 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:55.517262 | orchestrator | 2026-03-03 01:04:55 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:04:55.518571 | orchestrator | 2026-03-03 01:04:55 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:04:55.518597 | orchestrator | 2026-03-03 01:04:55 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:04:58.551112 | orchestrator | 2026-03-03 01:04:58 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:04:58.551884 | orchestrator | 2026-03-03 01:04:58 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:04:58.552678 | orchestrator | 2026-03-03 01:04:58 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:04:58.552716 | orchestrator | 2026-03-03 01:04:58 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:01.589634 | orchestrator | 2026-03-03 01:05:01 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:05:01.592051 | orchestrator | 2026-03-03 01:05:01 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:01.594141 | orchestrator | 2026-03-03 01:05:01 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 
01:05:01.594238 | orchestrator | 2026-03-03 01:05:01 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:04.637440 | orchestrator | 2026-03-03 01:05:04 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:05:04.637914 | orchestrator | 2026-03-03 01:05:04 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:04.638949 | orchestrator | 2026-03-03 01:05:04 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:04.639859 | orchestrator | 2026-03-03 01:05:04 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:07.668194 | orchestrator | 2026-03-03 01:05:07 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:05:07.668783 | orchestrator | 2026-03-03 01:05:07 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:07.669871 | orchestrator | 2026-03-03 01:05:07 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:07.669914 | orchestrator | 2026-03-03 01:05:07 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:10.711860 | orchestrator | 2026-03-03 01:05:10 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:05:10.712941 | orchestrator | 2026-03-03 01:05:10 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:10.715760 | orchestrator | 2026-03-03 01:05:10 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:10.715825 | orchestrator | 2026-03-03 01:05:10 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:13.756827 | orchestrator | 2026-03-03 01:05:13 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:05:13.758364 | orchestrator | 2026-03-03 01:05:13 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:13.758974 | orchestrator | 2026-03-03 01:05:13 | 
INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:13.758989 | orchestrator | 2026-03-03 01:05:13 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:16.783676 | orchestrator | 2026-03-03 01:05:16 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:05:16.784020 | orchestrator | 2026-03-03 01:05:16 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:16.786601 | orchestrator | 2026-03-03 01:05:16 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:16.786658 | orchestrator | 2026-03-03 01:05:16 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:19.825074 | orchestrator | 2026-03-03 01:05:19 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:05:19.826952 | orchestrator | 2026-03-03 01:05:19 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:19.829638 | orchestrator | 2026-03-03 01:05:19 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:19.829684 | orchestrator | 2026-03-03 01:05:19 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:22.866779 | orchestrator | 2026-03-03 01:05:22 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state STARTED 2026-03-03 01:05:22.868762 | orchestrator | 2026-03-03 01:05:22 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:22.871176 | orchestrator | 2026-03-03 01:05:22 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:22.871376 | orchestrator | 2026-03-03 01:05:22 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:25.912180 | orchestrator | 2026-03-03 01:05:25.912238 | orchestrator | 2026-03-03 01:05:25 | INFO  | Task bc95f5b6-fa6f-4041-91bb-fca57d82ebbf is in state SUCCESS 2026-03-03 01:05:25.913747 | orchestrator | [WARNING]: Collection 
community.general does not support Ansible version 2026-03-03 01:05:25.913809 | orchestrator | 2.16.14 2026-03-03 01:05:25.913816 | orchestrator | 2026-03-03 01:05:25.913855 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-03 01:05:25.913867 | orchestrator | 2026-03-03 01:05:25.913871 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-03 01:05:25.913875 | orchestrator | Tuesday 03 March 2026 01:03:27 +0000 (0:00:00.516) 0:00:00.516 ********* 2026-03-03 01:05:25.913882 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:05:25.913887 | orchestrator | 2026-03-03 01:05:25.913891 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-03 01:05:25.913895 | orchestrator | Tuesday 03 March 2026 01:03:28 +0000 (0:00:00.561) 0:00:01.077 ********* 2026-03-03 01:05:25.913898 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.913902 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.913906 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.913910 | orchestrator | 2026-03-03 01:05:25.913914 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-03 01:05:25.913917 | orchestrator | Tuesday 03 March 2026 01:03:28 +0000 (0:00:00.582) 0:00:01.660 ********* 2026-03-03 01:05:25.913921 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.913934 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.913939 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.913949 | orchestrator | 2026-03-03 01:05:25.913953 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-03 01:05:25.913956 | orchestrator | Tuesday 03 March 2026 01:03:29 +0000 (0:00:00.269) 0:00:01.930 ********* 2026-03-03 01:05:25.913960 | orchestrator | 
ok: [testbed-node-3] 2026-03-03 01:05:25.913964 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.913968 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.913971 | orchestrator | 2026-03-03 01:05:25.913975 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-03 01:05:25.913979 | orchestrator | Tuesday 03 March 2026 01:03:29 +0000 (0:00:00.650) 0:00:02.580 ********* 2026-03-03 01:05:25.913982 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.913986 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.914000 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.914009 | orchestrator | 2026-03-03 01:05:25.914707 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-03 01:05:25.914725 | orchestrator | Tuesday 03 March 2026 01:03:30 +0000 (0:00:00.276) 0:00:02.856 ********* 2026-03-03 01:05:25.914732 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.914739 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.914746 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.914752 | orchestrator | 2026-03-03 01:05:25.914759 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-03 01:05:25.914765 | orchestrator | Tuesday 03 March 2026 01:03:30 +0000 (0:00:00.287) 0:00:03.144 ********* 2026-03-03 01:05:25.914771 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.914776 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.914785 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.914792 | orchestrator | 2026-03-03 01:05:25.914798 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-03 01:05:25.914805 | orchestrator | Tuesday 03 March 2026 01:03:30 +0000 (0:00:00.278) 0:00:03.423 ********* 2026-03-03 01:05:25.914812 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.914818 | orchestrator | 
skipping: [testbed-node-4] 2026-03-03 01:05:25.914825 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.914831 | orchestrator | 2026-03-03 01:05:25.914836 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-03 01:05:25.914842 | orchestrator | Tuesday 03 March 2026 01:03:31 +0000 (0:00:00.393) 0:00:03.816 ********* 2026-03-03 01:05:25.914848 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.914854 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.914861 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.914867 | orchestrator | 2026-03-03 01:05:25.914883 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-03 01:05:25.914890 | orchestrator | Tuesday 03 March 2026 01:03:31 +0000 (0:00:00.262) 0:00:04.078 ********* 2026-03-03 01:05:25.914896 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-03 01:05:25.914902 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-03 01:05:25.914906 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-03 01:05:25.914910 | orchestrator | 2026-03-03 01:05:25.914913 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-03 01:05:25.914917 | orchestrator | Tuesday 03 March 2026 01:03:31 +0000 (0:00:00.588) 0:00:04.667 ********* 2026-03-03 01:05:25.914921 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.914925 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.914929 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.914932 | orchestrator | 2026-03-03 01:05:25.914936 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-03 01:05:25.914940 | orchestrator | Tuesday 03 March 2026 01:03:32 +0000 (0:00:00.377) 0:00:05.044 
********* 2026-03-03 01:05:25.914944 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-03 01:05:25.914947 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-03 01:05:25.914951 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-03 01:05:25.914955 | orchestrator | 2026-03-03 01:05:25.914959 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-03 01:05:25.914962 | orchestrator | Tuesday 03 March 2026 01:03:34 +0000 (0:00:01.912) 0:00:06.957 ********* 2026-03-03 01:05:25.914966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-03 01:05:25.914970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-03 01:05:25.914975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-03 01:05:25.914979 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.914983 | orchestrator | 2026-03-03 01:05:25.915012 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-03 01:05:25.915019 | orchestrator | Tuesday 03 March 2026 01:03:34 +0000 (0:00:00.632) 0:00:07.590 ********* 2026-03-03 01:05:25.915037 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.915044 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.915050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.915056 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915062 | orchestrator | 2026-03-03 01:05:25.915068 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-03 01:05:25.915074 | orchestrator | Tuesday 03 March 2026 01:03:35 +0000 (0:00:00.831) 0:00:08.421 ********* 2026-03-03 01:05:25.915082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.915090 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.915102 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.915107 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915119 | orchestrator | 2026-03-03 
01:05:25.915164 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-03 01:05:25.915167 | orchestrator | Tuesday 03 March 2026 01:03:36 +0000 (0:00:00.372) 0:00:08.793 ********* 2026-03-03 01:05:25.915173 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4a2778f86c87', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-03 01:03:32.912894', 'end': '2026-03-03 01:03:32.942038', 'delta': '0:00:00.029144', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4a2778f86c87'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-03 01:05:25.915179 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '95eead2402ce', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-03 01:03:33.517736', 'end': '2026-03-03 01:03:33.543493', 'delta': '0:00:00.025757', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['95eead2402ce'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-03 01:05:25.915200 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9ba225fcad13', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-2'], 'start': '2026-03-03 01:03:34.024837', 'end': '2026-03-03 01:03:34.056475', 'delta': '0:00:00.031638', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9ba225fcad13'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-03 01:05:25.915205 | orchestrator | 2026-03-03 01:05:25.915209 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-03 01:05:25.915213 | orchestrator | Tuesday 03 March 2026 01:03:36 +0000 (0:00:00.191) 0:00:08.985 ********* 2026-03-03 01:05:25.915217 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.915221 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.915224 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.915228 | orchestrator | 2026-03-03 01:05:25.915232 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-03 01:05:25.915243 | orchestrator | Tuesday 03 March 2026 01:03:36 +0000 (0:00:00.426) 0:00:09.412 ********* 2026-03-03 01:05:25.915247 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-03 01:05:25.915250 | orchestrator | 2026-03-03 01:05:25.915254 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-03 01:05:25.915258 | orchestrator | Tuesday 03 March 2026 01:03:39 +0000 (0:00:02.992) 0:00:12.404 ********* 2026-03-03 01:05:25.915262 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915266 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915270 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915273 | 
orchestrator | 2026-03-03 01:05:25.915277 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-03 01:05:25.915281 | orchestrator | Tuesday 03 March 2026 01:03:39 +0000 (0:00:00.266) 0:00:12.671 ********* 2026-03-03 01:05:25.915285 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915289 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915292 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915297 | orchestrator | 2026-03-03 01:05:25.915370 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-03 01:05:25.915386 | orchestrator | Tuesday 03 March 2026 01:03:40 +0000 (0:00:00.356) 0:00:13.028 ********* 2026-03-03 01:05:25.915393 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915400 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915406 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915413 | orchestrator | 2026-03-03 01:05:25.915420 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-03 01:05:25.915427 | orchestrator | Tuesday 03 March 2026 01:03:40 +0000 (0:00:00.384) 0:00:13.412 ********* 2026-03-03 01:05:25.915432 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.915436 | orchestrator | 2026-03-03 01:05:25.915440 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-03 01:05:25.915444 | orchestrator | Tuesday 03 March 2026 01:03:40 +0000 (0:00:00.105) 0:00:13.518 ********* 2026-03-03 01:05:25.915447 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915451 | orchestrator | 2026-03-03 01:05:25.915455 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-03 01:05:25.915459 | orchestrator | Tuesday 03 March 2026 01:03:40 +0000 (0:00:00.211) 0:00:13.729 ********* 2026-03-03 01:05:25.915462 | 
orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915466 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915470 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915474 | orchestrator | 2026-03-03 01:05:25.915477 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-03 01:05:25.915481 | orchestrator | Tuesday 03 March 2026 01:03:41 +0000 (0:00:00.260) 0:00:13.990 ********* 2026-03-03 01:05:25.915485 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915489 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915493 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915496 | orchestrator | 2026-03-03 01:05:25.915500 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-03 01:05:25.915504 | orchestrator | Tuesday 03 March 2026 01:03:41 +0000 (0:00:00.281) 0:00:14.271 ********* 2026-03-03 01:05:25.915508 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915511 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915515 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915519 | orchestrator | 2026-03-03 01:05:25.915528 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-03 01:05:25.915532 | orchestrator | Tuesday 03 March 2026 01:03:41 +0000 (0:00:00.393) 0:00:14.664 ********* 2026-03-03 01:05:25.915536 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915539 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915543 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915547 | orchestrator | 2026-03-03 01:05:25.915551 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-03 01:05:25.915559 | orchestrator | Tuesday 03 March 2026 01:03:42 +0000 (0:00:00.290) 0:00:14.954 ********* 2026-03-03 01:05:25.915563 | 
orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915567 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915571 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915574 | orchestrator | 2026-03-03 01:05:25.915578 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-03 01:05:25.915582 | orchestrator | Tuesday 03 March 2026 01:03:42 +0000 (0:00:00.276) 0:00:15.231 ********* 2026-03-03 01:05:25.915586 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915590 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915593 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915597 | orchestrator | 2026-03-03 01:05:25.915619 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-03 01:05:25.915624 | orchestrator | Tuesday 03 March 2026 01:03:42 +0000 (0:00:00.280) 0:00:15.511 ********* 2026-03-03 01:05:25.915628 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915632 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915635 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915639 | orchestrator | 2026-03-03 01:05:25.915643 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-03 01:05:25.915650 | orchestrator | Tuesday 03 March 2026 01:03:43 +0000 (0:00:00.418) 0:00:15.929 ********* 2026-03-03 01:05:25.915655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--896495c2--660d--5a75--b418--75215a0ec973-osd--block--896495c2--660d--5a75--b418--75215a0ec973', 'dm-uuid-LVM-GpJP3SwEqN8IRMzzg27rllwSIVirHlhSfyzZPmY6R0Kn9YDJtp0fc4Q7CuoV0X63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d486d743--7c4f--58d7--8950--e96875d5f319-osd--block--d486d743--7c4f--58d7--8950--e96875d5f319', 'dm-uuid-LVM-9EM2pLoCc81f2X7Vie2gvZeoKVsOO03V8d2PXcJGe3Ps8WTrewvxmi6DdodPaJYy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915719 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--896495c2--660d--5a75--b418--75215a0ec973-osd--block--896495c2--660d--5a75--b418--75215a0ec973'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hJi2Py-81jO-thE3-PeUa-ee3o-6IJn-t2lTlM', 'scsi-0QEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702', 'scsi-SQEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd-osd--block--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd', 'dm-uuid-LVM-FzkgkoVfb2RnZHeeixvaBLUlzwoz3GmBKkIXQJrdo7uwaev79qNVS5X3yHIcAGus'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d486d743--7c4f--58d7--8950--e96875d5f319-osd--block--d486d743--7c4f--58d7--8950--e96875d5f319'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cZIQTi-GOs5-CdQ0-0JfI-XO4A-PPhc-92rEKT', 'scsi-0QEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473', 'scsi-SQEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60a17889--adeb--5df5--a11b--dee290996ccf-osd--block--60a17889--adeb--5df5--a11b--dee290996ccf', 'dm-uuid-LVM-2KOMfDqnadxchcrcgKh2pqnIyHgmTXEvWp5NIBI4IIH0Z87KkSOTClHBnaFxSsBv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8', 'scsi-SQEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-03 01:05:25.915796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915812 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.915817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part1', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part14', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part15', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part16', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd-osd--block--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Eimw9W-MYPI-UA59-afLt-X9H7-b3VL-NmfrZ3', 'scsi-0QEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc', 'scsi-SQEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--60a17889--adeb--5df5--a11b--dee290996ccf-osd--block--60a17889--adeb--5df5--a11b--dee290996ccf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nHF385-RKkI-sMjx-wEKT-CEHl-TQSH-tknXGo', 'scsi-0QEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d', 'scsi-SQEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4', 'scsi-SQEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915867 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.915872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7865f1e--8b85--57a7--a15d--91986b577cab-osd--block--f7865f1e--8b85--57a7--a15d--91986b577cab', 'dm-uuid-LVM-rzWUWsHInSLRWdrp72kGd49H55Q2diyIak9DoOb0xRhEavC39dzPF5cbOf6a2zzB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b901fd44--5489--5e25--a5fe--b820905f87a1-osd--block--b901fd44--5489--5e25--a5fe--b820905f87a1', 'dm-uuid-LVM-ETIN2cURdX3qKY8G784R8MS3Xrl7JPk1NOvKGIXLGbfYZvO5OlWQEi5VkrkwES6J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-03 01:05:25.915935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part16', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f7865f1e--8b85--57a7--a15d--91986b577cab-osd--block--f7865f1e--8b85--57a7--a15d--91986b577cab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TSkgdd-4u32-vhZa-Igw8-mdVc-zUOc-5kXbMO', 'scsi-0QEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361', 'scsi-SQEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b901fd44--5489--5e25--a5fe--b820905f87a1-osd--block--b901fd44--5489--5e25--a5fe--b820905f87a1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g33K7x-lc71-nV2n-50c4-euam-kr43-sc7tcb', 'scsi-0QEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301', 'scsi-SQEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53', 'scsi-SQEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-03 01:05:25.915968 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.915972 | orchestrator | 2026-03-03 01:05:25.915977 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-03 01:05:25.915984 | orchestrator | Tuesday 03 March 2026 01:03:43 +0000 (0:00:00.481) 0:00:16.410 ********* 2026-03-03 01:05:25.915989 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--896495c2--660d--5a75--b418--75215a0ec973-osd--block--896495c2--660d--5a75--b418--75215a0ec973', 'dm-uuid-LVM-GpJP3SwEqN8IRMzzg27rllwSIVirHlhSfyzZPmY6R0Kn9YDJtp0fc4Q7CuoV0X63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.915994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d486d743--7c4f--58d7--8950--e96875d5f319-osd--block--d486d743--7c4f--58d7--8950--e96875d5f319', 'dm-uuid-LVM-9EM2pLoCc81f2X7Vie2gvZeoKVsOO03V8d2PXcJGe3Ps8WTrewvxmi6DdodPaJYy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916002 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916016 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916032 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd-osd--block--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd', 'dm-uuid-LVM-FzkgkoVfb2RnZHeeixvaBLUlzwoz3GmBKkIXQJrdo7uwaev79qNVS5X3yHIcAGus'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916042 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916048 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60a17889--adeb--5df5--a11b--dee290996ccf-osd--block--60a17889--adeb--5df5--a11b--dee290996ccf', 'dm-uuid-LVM-2KOMfDqnadxchcrcgKh2pqnIyHgmTXEvWp5NIBI4IIH0Z87KkSOTClHBnaFxSsBv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916106 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916131 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916139 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916165 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddd2d81c-00c7-4e9a-bb31-866c5a0eeae2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916178 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7865f1e--8b85--57a7--a15d--91986b577cab-osd--block--f7865f1e--8b85--57a7--a15d--91986b577cab', 'dm-uuid-LVM-rzWUWsHInSLRWdrp72kGd49H55Q2diyIak9DoOb0xRhEavC39dzPF5cbOf6a2zzB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916194 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b901fd44--5489--5e25--a5fe--b820905f87a1-osd--block--b901fd44--5489--5e25--a5fe--b820905f87a1', 'dm-uuid-LVM-ETIN2cURdX3qKY8G784R8MS3Xrl7JPk1NOvKGIXLGbfYZvO5OlWQEi5VkrkwES6J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916205 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--896495c2--660d--5a75--b418--75215a0ec973-osd--block--896495c2--660d--5a75--b418--75215a0ec973'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hJi2Py-81jO-thE3-PeUa-ee3o-6IJn-t2lTlM', 'scsi-0QEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702', 'scsi-SQEMU_QEMU_HARDDISK_0c164c56-6d34-4cb4-9884-5e599fdbb702'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916224 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916228 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916236 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d486d743--7c4f--58d7--8950--e96875d5f319-osd--block--d486d743--7c4f--58d7--8950--e96875d5f319'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cZIQTi-GOs5-CdQ0-0JfI-XO4A-PPhc-92rEKT', 'scsi-0QEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473', 'scsi-SQEMU_QEMU_HARDDISK_f1b88ce7-718e-41a1-adfb-e8e019701473'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part1', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part14', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part15', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part16', 'scsi-SQEMU_QEMU_HARDDISK_da145857-97b8-46c1-bd58-274a585c5d78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-03 01:05:25.916254 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd-osd--block--a3b27c0a--2179--5024--9c6e--3cd3ebbe6cfd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Eimw9W-MYPI-UA59-afLt-X9H7-b3VL-NmfrZ3', 'scsi-0QEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc', 'scsi-SQEMU_QEMU_HARDDISK_dcb1f927-210f-415f-93de-fe80b62d5dbc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916265 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916271 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8', 'scsi-SQEMU_QEMU_HARDDISK_8acbf85b-6b93-492a-b370-4408c7f2c4d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916278 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916282 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--60a17889--adeb--5df5--a11b--dee290996ccf-osd--block--60a17889--adeb--5df5--a11b--dee290996ccf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nHF385-RKkI-sMjx-wEKT-CEHl-TQSH-tknXGo', 'scsi-0QEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d', 'scsi-SQEMU_QEMU_HARDDISK_2c5ded08-cf26-49fb-8fcb-b7f7b62b452d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-03-03 01:05:25.916416 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4', 'scsi-SQEMU_QEMU_HARDDISK_bb2822fc-3ed5-43a4-912e-7bd302443dc4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916434 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.916439 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': 
[], 'uuids': ['2026-03-03-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916447 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916451 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.916461 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part1', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part14', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part15', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part16', 'scsi-SQEMU_QEMU_HARDDISK_f28e64b4-5f1f-4b94-8837-d9b394718ec8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-03 01:05:25.916469 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f7865f1e--8b85--57a7--a15d--91986b577cab-osd--block--f7865f1e--8b85--57a7--a15d--91986b577cab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TSkgdd-4u32-vhZa-Igw8-mdVc-zUOc-5kXbMO', 'scsi-0QEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361', 'scsi-SQEMU_QEMU_HARDDISK_bba38cc5-8585-4a2f-8505-6987b8a4c361'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916474 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b901fd44--5489--5e25--a5fe--b820905f87a1-osd--block--b901fd44--5489--5e25--a5fe--b820905f87a1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g33K7x-lc71-nV2n-50c4-euam-kr43-sc7tcb', 'scsi-0QEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301', 'scsi-SQEMU_QEMU_HARDDISK_307e1601-9544-4595-9bde-10bb8c02a301'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916478 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53', 'scsi-SQEMU_QEMU_HARDDISK_bf883d86-e883-4c70-9a49-1cd6f6186c53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916487 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-03-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-03 01:05:25.916494 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.916498 | orchestrator | 2026-03-03 01:05:25.916502 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-03 01:05:25.916506 | orchestrator | Tuesday 03 March 2026 01:03:44 +0000 (0:00:00.429) 0:00:16.839 ********* 2026-03-03 01:05:25.916510 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.916514 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.916517 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.916521 | orchestrator | 2026-03-03 01:05:25.916525 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-03 01:05:25.916529 | orchestrator | Tuesday 03 March 2026 01:03:44 +0000 (0:00:00.617) 0:00:17.458 ********* 2026-03-03 01:05:25.916535 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.916546 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.916550 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.916554 | orchestrator | 2026-03-03 01:05:25.916558 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-03 01:05:25.916562 | orchestrator | Tuesday 03 March 2026 01:03:45 +0000 (0:00:00.408) 0:00:17.866 ********* 2026-03-03 01:05:25.916565 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.916569 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.916573 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.916577 | orchestrator | 2026-03-03 01:05:25.916581 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-03 01:05:25.916584 | orchestrator | Tuesday 03 March 2026 01:03:45 +0000 (0:00:00.626) 0:00:18.493 
********* 2026-03-03 01:05:25.916588 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.916592 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.916596 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.916600 | orchestrator | 2026-03-03 01:05:25.916604 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-03 01:05:25.916607 | orchestrator | Tuesday 03 March 2026 01:03:46 +0000 (0:00:00.275) 0:00:18.768 ********* 2026-03-03 01:05:25.916611 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.916615 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.916619 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.916623 | orchestrator | 2026-03-03 01:05:25.916627 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-03 01:05:25.916631 | orchestrator | Tuesday 03 March 2026 01:03:46 +0000 (0:00:00.388) 0:00:19.157 ********* 2026-03-03 01:05:25.916634 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.916638 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.916642 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.916646 | orchestrator | 2026-03-03 01:05:25.916650 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-03 01:05:25.916653 | orchestrator | Tuesday 03 March 2026 01:03:46 +0000 (0:00:00.432) 0:00:19.590 ********* 2026-03-03 01:05:25.916657 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-03 01:05:25.916661 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-03 01:05:25.916665 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-03 01:05:25.916669 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-03 01:05:25.916673 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-03 01:05:25.916677 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-03 01:05:25.916681 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-03 01:05:25.916684 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-03 01:05:25.916688 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-03 01:05:25.916692 | orchestrator | 2026-03-03 01:05:25.916696 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-03 01:05:25.916703 | orchestrator | Tuesday 03 March 2026 01:03:47 +0000 (0:00:00.786) 0:00:20.377 ********* 2026-03-03 01:05:25.916707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-03 01:05:25.916711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-03 01:05:25.916714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-03 01:05:25.916718 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.916722 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-03 01:05:25.916727 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-03 01:05:25.916730 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-03 01:05:25.916734 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.916738 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-03 01:05:25.916746 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-03 01:05:25.916750 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-03 01:05:25.916754 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.916758 | orchestrator | 2026-03-03 01:05:25.916761 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-03 01:05:25.916765 | orchestrator | Tuesday 03 March 2026 01:03:47 +0000 (0:00:00.313) 0:00:20.691 ********* 2026-03-03 
01:05:25.916770 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:05:25.916773 | orchestrator | 2026-03-03 01:05:25.916777 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-03 01:05:25.916865 | orchestrator | Tuesday 03 March 2026 01:03:48 +0000 (0:00:00.591) 0:00:21.282 ********* 2026-03-03 01:05:25.916881 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.916888 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.916894 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.916900 | orchestrator | 2026-03-03 01:05:25.916907 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-03 01:05:25.916913 | orchestrator | Tuesday 03 March 2026 01:03:48 +0000 (0:00:00.290) 0:00:21.572 ********* 2026-03-03 01:05:25.916919 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.916931 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.916938 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.916944 | orchestrator | 2026-03-03 01:05:25.916952 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-03 01:05:25.916956 | orchestrator | Tuesday 03 March 2026 01:03:49 +0000 (0:00:00.276) 0:00:21.849 ********* 2026-03-03 01:05:25.916988 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.916992 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.916996 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:05:25.917000 | orchestrator | 2026-03-03 01:05:25.917004 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-03 01:05:25.917011 | orchestrator | Tuesday 03 March 2026 01:03:49 +0000 (0:00:00.274) 0:00:22.123 ********* 2026-03-03 
01:05:25.917016 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.917020 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.917024 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.917028 | orchestrator | 2026-03-03 01:05:25.917032 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-03 01:05:25.917036 | orchestrator | Tuesday 03 March 2026 01:03:50 +0000 (0:00:00.680) 0:00:22.804 ********* 2026-03-03 01:05:25.917040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:05:25.917044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:05:25.917047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:05:25.917051 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.917055 | orchestrator | 2026-03-03 01:05:25.917059 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-03 01:05:25.917067 | orchestrator | Tuesday 03 March 2026 01:03:50 +0000 (0:00:00.337) 0:00:23.142 ********* 2026-03-03 01:05:25.917071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:05:25.917075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:05:25.917079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:05:25.917083 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.917087 | orchestrator | 2026-03-03 01:05:25.917091 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-03 01:05:25.917094 | orchestrator | Tuesday 03 March 2026 01:03:50 +0000 (0:00:00.351) 0:00:23.493 ********* 2026-03-03 01:05:25.917098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-03 01:05:25.917102 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-03 01:05:25.917106 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-03 01:05:25.917109 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.917113 | orchestrator | 2026-03-03 01:05:25.917117 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-03 01:05:25.917121 | orchestrator | Tuesday 03 March 2026 01:03:51 +0000 (0:00:00.338) 0:00:23.832 ********* 2026-03-03 01:05:25.917125 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:05:25.917128 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:05:25.917132 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:05:25.917136 | orchestrator | 2026-03-03 01:05:25.917140 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-03 01:05:25.917144 | orchestrator | Tuesday 03 March 2026 01:03:51 +0000 (0:00:00.291) 0:00:24.124 ********* 2026-03-03 01:05:25.917147 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-03 01:05:25.917151 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-03 01:05:25.917155 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-03 01:05:25.917159 | orchestrator | 2026-03-03 01:05:25.917163 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-03 01:05:25.917166 | orchestrator | Tuesday 03 March 2026 01:03:51 +0000 (0:00:00.447) 0:00:24.572 ********* 2026-03-03 01:05:25.917170 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-03 01:05:25.917174 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-03 01:05:25.917178 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-03 01:05:25.917182 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-03 01:05:25.917186 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-03 01:05:25.917190 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-03 01:05:25.917194 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-03 01:05:25.917198 | orchestrator | 2026-03-03 01:05:25.917201 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-03 01:05:25.917205 | orchestrator | Tuesday 03 March 2026 01:03:52 +0000 (0:00:00.886) 0:00:25.459 ********* 2026-03-03 01:05:25.917209 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-03 01:05:25.917213 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-03 01:05:25.917217 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-03 01:05:25.917221 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-03 01:05:25.917224 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-03 01:05:25.917228 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-03 01:05:25.917235 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-03 01:05:25.917243 | orchestrator | 2026-03-03 01:05:25.917247 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-03 01:05:25.917250 | orchestrator | Tuesday 03 March 2026 01:03:54 +0000 (0:00:01.680) 0:00:27.139 ********* 2026-03-03 01:05:25.917254 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:05:25.917258 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:05:25.917264 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-03 01:05:25.917268 | orchestrator | 2026-03-03 01:05:25.917272 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-03 01:05:25.917276 | orchestrator | Tuesday 03 March 2026 01:03:54 +0000 (0:00:00.336) 0:00:27.476 ********* 2026-03-03 01:05:25.917280 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-03 01:05:25.917285 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-03 01:05:25.917289 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-03 01:05:25.917294 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-03 01:05:25.917319 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-03 01:05:25.917326 | orchestrator | 2026-03-03 01:05:25.917329 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-03 01:05:25.917333 | orchestrator | Tuesday 03 March 2026 01:04:37 +0000 (0:00:43.182) 0:01:10.659 ********* 2026-03-03 01:05:25.917337 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917341 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917345 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917348 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917353 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917360 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917364 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-03 01:05:25.917368 | orchestrator | 2026-03-03 01:05:25.917371 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-03 01:05:25.917375 | orchestrator | Tuesday 03 March 2026 01:04:58 +0000 (0:00:20.884) 0:01:31.544 ********* 2026-03-03 01:05:25.917379 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917383 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917386 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917390 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917397 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917401 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917405 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-03 01:05:25.917408 | orchestrator | 2026-03-03 01:05:25.917412 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-03 01:05:25.917416 | orchestrator | Tuesday 03 March 2026 01:05:09 +0000 (0:00:10.525) 0:01:42.069 ********* 2026-03-03 01:05:25.917420 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917424 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-03 01:05:25.917428 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-03 01:05:25.917431 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917435 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-03 01:05:25.917442 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-03 01:05:25.917446 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917450 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-03 01:05:25.917454 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-03 01:05:25.917460 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917464 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-03 01:05:25.917467 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-03 01:05:25.917471 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917475 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-03 01:05:25.917479 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-03 01:05:25.917482 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-03 01:05:25.917486 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-03 01:05:25.917490 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-03 01:05:25.917494 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-03 01:05:25.917497 | orchestrator | 2026-03-03 01:05:25.917501 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:05:25.917505 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-03 01:05:25.917510 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-03 01:05:25.917515 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-03 01:05:25.917518 | orchestrator | 2026-03-03 01:05:25.917522 | orchestrator | 2026-03-03 01:05:25.917526 | orchestrator | 2026-03-03 01:05:25.917530 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:05:25.917533 | orchestrator | Tuesday 03 March 2026 01:05:25 +0000 (0:00:15.861) 0:01:57.931 ********* 2026-03-03 01:05:25.917537 | orchestrator | =============================================================================== 2026-03-03 01:05:25.917541 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.18s 2026-03-03 01:05:25.917544 | orchestrator | generate keys ---------------------------------------------------------- 20.88s 2026-03-03 01:05:25.917548 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 15.86s 
2026-03-03 01:05:25.917555 | orchestrator | get keys from monitors ------------------------------------------------- 10.53s 2026-03-03 01:05:25.917559 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.99s 2026-03-03 01:05:25.917563 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.91s 2026-03-03 01:05:25.917567 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.68s 2026-03-03 01:05:25.917570 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.89s 2026-03-03 01:05:25.917574 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.83s 2026-03-03 01:05:25.917578 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.79s 2026-03-03 01:05:25.917582 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.68s 2026-03-03 01:05:25.917585 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.65s 2026-03-03 01:05:25.917589 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.63s 2026-03-03 01:05:25.917593 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2026-03-03 01:05:25.917597 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.62s 2026-03-03 01:05:25.917601 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.59s 2026-03-03 01:05:25.917604 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.59s 2026-03-03 01:05:25.917608 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.58s 2026-03-03 01:05:25.917612 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.56s 2026-03-03 
01:05:25.917616 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.48s 2026-03-03 01:05:25.917620 | orchestrator | 2026-03-03 01:05:25 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:25.917623 | orchestrator | 2026-03-03 01:05:25 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:25.917627 | orchestrator | 2026-03-03 01:05:25 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:28.974765 | orchestrator | 2026-03-03 01:05:28 | INFO  | Task ddd7205f-a650-4f5a-b81f-65c2361e5a5d is in state STARTED 2026-03-03 01:05:28.977025 | orchestrator | 2026-03-03 01:05:28 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:28.979514 | orchestrator | 2026-03-03 01:05:28 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:28.979565 | orchestrator | 2026-03-03 01:05:28 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:32.016797 | orchestrator | 2026-03-03 01:05:32 | INFO  | Task ddd7205f-a650-4f5a-b81f-65c2361e5a5d is in state STARTED 2026-03-03 01:05:32.019189 | orchestrator | 2026-03-03 01:05:32 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:32.020868 | orchestrator | 2026-03-03 01:05:32 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:32.020986 | orchestrator | 2026-03-03 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:35.090302 | orchestrator | 2026-03-03 01:05:35 | INFO  | Task ddd7205f-a650-4f5a-b81f-65c2361e5a5d is in state STARTED 2026-03-03 01:05:35.092328 | orchestrator | 2026-03-03 01:05:35 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:05:35.095336 | orchestrator | 2026-03-03 01:05:35 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:35.095464 | orchestrator | 2026-03-03 
01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:05:59.448527 | orchestrator | 2026-03-03 01:05:59 | INFO  | Task ddd7205f-a650-4f5a-b81f-65c2361e5a5d is in state STARTED 2026-03-03 01:05:59.449564 | orchestrator | 2026-03-03 01:05:59 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state
STARTED 2026-03-03 01:05:59.450818 | orchestrator | 2026-03-03 01:05:59 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:05:59.450858 | orchestrator | 2026-03-03 01:05:59 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:02.545912 | orchestrator | 2026-03-03 01:06:02 | INFO  | Task ddd7205f-a650-4f5a-b81f-65c2361e5a5d is in state SUCCESS 2026-03-03 01:06:02.547701 | orchestrator | 2026-03-03 01:06:02 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:02.551258 | orchestrator | 2026-03-03 01:06:02 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:06:02.551960 | orchestrator | 2026-03-03 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:05.612962 | orchestrator | 2026-03-03 01:06:05 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:05.615338 | orchestrator | 2026-03-03 01:06:05 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state STARTED 2026-03-03 01:06:05.617016 | orchestrator | 2026-03-03 01:06:05 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:06:05.617284 | orchestrator | 2026-03-03 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:08.663599 | orchestrator | 2026-03-03 01:06:08 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:08.665796 | orchestrator | 2026-03-03 01:06:08 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state STARTED 2026-03-03 01:06:08.667836 | orchestrator | 2026-03-03 01:06:08 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state STARTED 2026-03-03 01:06:08.668069 | orchestrator | 2026-03-03 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:11.716416 | orchestrator | 2026-03-03 01:06:11 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:11.717796 | orchestrator | 
2026-03-03 01:06:11 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state STARTED 2026-03-03 01:06:11.721172 | orchestrator | 2026-03-03 01:06:11 | INFO  | Task 33ebcfa5-bbdc-41db-8180-82b0a09c8b47 is in state SUCCESS 2026-03-03 01:06:11.722617 | orchestrator | 2026-03-03 01:06:11.722682 | orchestrator | 2026-03-03 01:06:11.722690 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-03 01:06:11.722697 | orchestrator | 2026-03-03 01:06:11.722702 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-03 01:06:11.722708 | orchestrator | Tuesday 03 March 2026 01:05:29 +0000 (0:00:00.145) 0:00:00.145 ********* 2026-03-03 01:06:11.722713 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-03 01:06:11.722719 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-03 01:06:11.722724 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-03 01:06:11.722730 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-03 01:06:11.722736 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-03 01:06:11.722741 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-03 01:06:11.722746 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-03 01:06:11.722751 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-03 01:06:11.722756 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-03 01:06:11.722776 | orchestrator | 
2026-03-03 01:06:11.722782 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-03 01:06:11.722788 | orchestrator | Tuesday 03 March 2026 01:05:33 +0000 (0:00:04.331) 0:00:04.478 *********
2026-03-03 01:06:11.722793 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-03 01:06:11.722798 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-03 01:06:11.722846 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-03 01:06:11.722851 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-03 01:06:11.722856 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-03 01:06:11.722861 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-03 01:06:11.722874 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-03 01:06:11.722880 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-03 01:06:11.722885 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-03 01:06:11.722890 | orchestrator |
2026-03-03 01:06:11.722895 | orchestrator | TASK [Create share directory] **************************************************
2026-03-03 01:06:11.722923 | orchestrator | Tuesday 03 March 2026 01:05:37 +0000 (0:00:03.779) 0:00:08.257 *********
2026-03-03 01:06:11.723074 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-03 01:06:11.723083 | orchestrator |
2026-03-03 01:06:11.723089 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-03 01:06:11.723094 | orchestrator | Tuesday 03 March 2026 01:05:38 +0000 (0:00:00.935) 0:00:09.193 *********
2026-03-03 01:06:11.723099 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-03 01:06:11.723105 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-03 01:06:11.723110 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-03 01:06:11.723115 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-03 01:06:11.723120 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-03 01:06:11.723126 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-03 01:06:11.723131 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-03 01:06:11.723136 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-03 01:06:11.723141 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-03 01:06:11.723147 | orchestrator |
2026-03-03 01:06:11.723152 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-03 01:06:11.723157 | orchestrator | Tuesday 03 March 2026 01:05:51 +0000 (0:00:13.169) 0:00:22.362 *********
2026-03-03 01:06:11.723162 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-03 01:06:11.723167 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-03 01:06:11.723172 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-03 01:06:11.723177 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-03 01:06:11.723191 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-03 01:06:11.723202 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-03 01:06:11.723208 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-03 01:06:11.723213 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-03 01:06:11.723218 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-03 01:06:11.723223 | orchestrator |
2026-03-03 01:06:11.723228 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-03 01:06:11.723234 | orchestrator | Tuesday 03 March 2026 01:05:54 +0000 (0:00:02.746) 0:00:25.109 *********
2026-03-03 01:06:11.723239 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-03 01:06:11.723245 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-03 01:06:11.723250 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-03 01:06:11.723255 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-03 01:06:11.723260 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-03 01:06:11.723266 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-03 01:06:11.723271 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-03 01:06:11.723276 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-03 01:06:11.723281 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-03 01:06:11.723286 | orchestrator |
2026-03-03 01:06:11.723291 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:06:11.723296 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:06:11.723301 | orchestrator |
2026-03-03 01:06:11.723307 | orchestrator |
2026-03-03 01:06:11.723312 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:06:11.723317 | orchestrator | Tuesday 03 March 2026 01:06:01 +0000 (0:00:06.770) 0:00:31.880 *********
2026-03-03 01:06:11.723322 | orchestrator | ===============================================================================
2026-03-03 01:06:11.723327 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.17s
2026-03-03 01:06:11.723332 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.77s
2026-03-03 01:06:11.723337 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.33s
2026-03-03 01:06:11.723347 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.78s
2026-03-03 01:06:11.723352 | orchestrator | Check if target directories exist --------------------------------------- 2.75s
2026-03-03 01:06:11.723358 | orchestrator | Create share directory -------------------------------------------------- 0.94s
2026-03-03 01:06:11.723363 | orchestrator |
2026-03-03 01:06:11.723368 | orchestrator |
2026-03-03 01:06:11.723373 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:06:11.723378 | orchestrator |
2026-03-03 01:06:11.723383 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:06:11.723388 | orchestrator | Tuesday 03 March 2026 01:04:45 +0000 (0:00:00.251) 0:00:00.251 *********
2026-03-03 01:06:11.723393 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.723399 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.723403 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.723409 | orchestrator |
2026-03-03 01:06:11.723414 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 01:06:11.723419 | orchestrator | Tuesday 03 March 2026 01:04:45 +0000 (0:00:00.293) 0:00:00.545 *********
2026-03-03 01:06:11.723424 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-03 01:06:11.723430 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-03 01:06:11.723439 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-03 01:06:11.723444 | orchestrator |
2026-03-03 01:06:11.723450 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-03 01:06:11.723455 | orchestrator |
2026-03-03 01:06:11.723460 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-03 01:06:11.723465 | orchestrator | Tuesday 03 March 2026 01:04:45 +0000 (0:00:00.398) 0:00:00.943 *********
2026-03-03 01:06:11.723471 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:06:11.723476 | orchestrator |
2026-03-03 01:06:11.723481 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-03 01:06:11.723498 | orchestrator | Tuesday 03 March 2026 01:04:46 +0000 (0:00:00.444) 0:00:01.388 *********
2026-03-03 01:06:11.723515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-03 01:06:11.723528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-03 01:06:11.723543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-03 01:06:11.723549 | orchestrator |
2026-03-03 01:06:11.723554 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-03 01:06:11.723559 | orchestrator | Tuesday 03 March 2026 01:04:47 +0000 (0:00:01.123) 0:00:02.512 *********
2026-03-03 01:06:11.723564 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.723569 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.723574 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.723579 | orchestrator |
2026-03-03 01:06:11.723587 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-03 01:06:11.723592 | orchestrator | Tuesday 03 March 2026 01:04:47 +0000 (0:00:00.350) 0:00:02.862 *********
2026-03-03 01:06:11.723596 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-03 01:06:11.723604 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-03 01:06:11.723609 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-03 01:06:11.723614 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-03 01:06:11.723619 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-03 01:06:11.723624 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-03 01:06:11.723629 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-03 01:06:11.723635 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-03 01:06:11.723640 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-03 01:06:11.723645 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-03 01:06:11.723650 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-03 01:06:11.723655 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-03 01:06:11.723660 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-03 01:06:11.723666 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-03 01:06:11.723671 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-03 01:06:11.723676 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-03 01:06:11.723681 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-03 01:06:11.723686 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-03 01:06:11.723691 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-03 01:06:11.723696 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-03 01:06:11.723701 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-03 01:06:11.723707 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-03 01:06:11.723715 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-03 01:06:11.723720 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-03 01:06:11.723726 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-03 01:06:11.723733 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-03 01:06:11.723738 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-03 01:06:11.723744 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-03 01:06:11.723752 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-03 01:06:11.723757 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-03 01:06:11.723762 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-03 01:06:11.723767 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-03 01:06:11.723776 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-03 01:06:11.723782 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-03 01:06:11.723787 | orchestrator |
2026-03-03 01:06:11.723800 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.723806 | orchestrator | Tuesday 03 March 2026 01:04:48 +0000 (0:00:00.676) 0:00:03.539 *********
2026-03-03 01:06:11.723811 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.723817 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.723825 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.723830 | orchestrator |
2026-03-03 01:06:11.723835 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.723841 | orchestrator | Tuesday 03 March 2026 01:04:48 +0000 (0:00:00.257) 0:00:03.796 *********
2026-03-03 01:06:11.723846 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.723857 | orchestrator |
2026-03-03 01:06:11.723862 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.723867 | orchestrator | Tuesday 03 March 2026 01:04:48 +0000 (0:00:00.122) 0:00:03.919 *********
2026-03-03 01:06:11.723873 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.723878 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.723883 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.723888 | orchestrator |
2026-03-03 01:06:11.723894 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.723899 | orchestrator | Tuesday 03 March 2026 01:04:49 +0000 (0:00:00.370) 0:00:04.290 *********
2026-03-03 01:06:11.723904 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.723910 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.723915 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.723920 | orchestrator |
2026-03-03 01:06:11.723926 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.723931 | orchestrator | Tuesday 03 March 2026 01:04:49 +0000 (0:00:00.263) 0:00:04.553 *********
2026-03-03 01:06:11.723936 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.723941 | orchestrator |
2026-03-03 01:06:11.723946 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.723952 | orchestrator | Tuesday 03 March 2026 01:04:49 +0000 (0:00:00.096) 0:00:04.650 *********
2026-03-03 01:06:11.723957 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.723962 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.723968 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.723973 | orchestrator |
2026-03-03 01:06:11.723978 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.723983 | orchestrator | Tuesday 03 March 2026 01:04:49 +0000 (0:00:00.266) 0:00:04.917 *********
2026-03-03 01:06:11.723989 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.723994 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.723999 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.724004 | orchestrator |
2026-03-03 01:06:11.724009 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.724014 | orchestrator | Tuesday 03 March 2026 01:04:50 +0000 (0:00:00.273) 0:00:05.190 *********
2026-03-03 01:06:11.724019 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724024 | orchestrator |
2026-03-03 01:06:11.724030 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.724036 | orchestrator | Tuesday 03 March 2026 01:04:50 +0000 (0:00:00.258) 0:00:05.449 *********
2026-03-03 01:06:11.724041 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724045 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.724051 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.724055 | orchestrator |
2026-03-03 01:06:11.724059 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.724065 | orchestrator | Tuesday 03 March 2026 01:04:50 +0000 (0:00:00.248) 0:00:05.697 *********
2026-03-03 01:06:11.724069 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.724073 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.724076 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.724080 | orchestrator |
2026-03-03 01:06:11.724084 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.724088 | orchestrator | Tuesday 03 March 2026 01:04:50 +0000 (0:00:00.281) 0:00:05.979 *********
2026-03-03 01:06:11.724091 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724095 | orchestrator |
2026-03-03 01:06:11.724099 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.724103 | orchestrator | Tuesday 03 March 2026 01:04:50 +0000 (0:00:00.111) 0:00:06.091 *********
2026-03-03 01:06:11.724106 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724110 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.724114 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.724117 | orchestrator |
2026-03-03 01:06:11.724121 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.724125 | orchestrator | Tuesday 03 March 2026 01:04:51 +0000 (0:00:00.277) 0:00:06.368 *********
2026-03-03 01:06:11.724128 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.724132 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.724136 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.724139 | orchestrator |
2026-03-03 01:06:11.724143 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.724146 | orchestrator | Tuesday 03 March 2026 01:04:51 +0000 (0:00:00.496) 0:00:06.865 *********
2026-03-03 01:06:11.724149 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724152 | orchestrator |
2026-03-03 01:06:11.724155 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.724158 | orchestrator | Tuesday 03 March 2026 01:04:51 +0000 (0:00:00.147) 0:00:07.012 *********
2026-03-03 01:06:11.724162 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724165 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.724168 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.724171 | orchestrator |
2026-03-03 01:06:11.724175 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.724178 | orchestrator | Tuesday 03 March 2026 01:04:52 +0000 (0:00:00.289) 0:00:07.302 *********
2026-03-03 01:06:11.724181 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.724184 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.724187 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.724190 | orchestrator |
2026-03-03 01:06:11.724194 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.724197 | orchestrator | Tuesday 03 March 2026 01:04:52 +0000 (0:00:00.309) 0:00:07.612 *********
2026-03-03 01:06:11.724200 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724203 | orchestrator |
2026-03-03 01:06:11.724206 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.724210 | orchestrator | Tuesday 03 March 2026 01:04:52 +0000 (0:00:00.132) 0:00:07.744 *********
2026-03-03 01:06:11.724213 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724218 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.724221 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.724225 | orchestrator |
2026-03-03 01:06:11.724228 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.724231 | orchestrator | Tuesday 03 March 2026 01:04:52 +0000 (0:00:00.325) 0:00:08.070 *********
2026-03-03 01:06:11.724234 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.724237 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.724240 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.724246 | orchestrator |
2026-03-03 01:06:11.724249 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.724253 | orchestrator | Tuesday 03 March 2026 01:04:53 +0000 (0:00:00.532) 0:00:08.602 *********
2026-03-03 01:06:11.724256 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724259 | orchestrator |
2026-03-03 01:06:11.724262 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.724265 | orchestrator | Tuesday 03 March 2026 01:04:53 +0000 (0:00:00.123) 0:00:08.726 *********
2026-03-03 01:06:11.724268 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724272 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.724275 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.724278 | orchestrator |
2026-03-03 01:06:11.724281 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.724285 | orchestrator | Tuesday 03 March 2026 01:04:53 +0000 (0:00:00.299) 0:00:09.026 *********
2026-03-03 01:06:11.724288 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.724291 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.724294 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.724297 | orchestrator |
2026-03-03 01:06:11.724301 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.724304 | orchestrator | Tuesday 03 March 2026 01:04:54 +0000 (0:00:00.341) 0:00:09.367 *********
2026-03-03 01:06:11.724307 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724310 | orchestrator |
2026-03-03 01:06:11.724313 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.724317 | orchestrator | Tuesday 03 March 2026 01:04:54 +0000 (0:00:00.135) 0:00:09.502 *********
2026-03-03 01:06:11.724320 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724323 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.724326 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.724329 | orchestrator |
2026-03-03 01:06:11.724332 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.724336 | orchestrator | Tuesday 03 March 2026 01:04:54 +0000 (0:00:00.492) 0:00:09.995 *********
2026-03-03 01:06:11.724339 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.724342 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.724345 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.724348 | orchestrator |
2026-03-03 01:06:11.724352 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.724355 | orchestrator | Tuesday 03 March 2026 01:04:55 +0000 (0:00:00.375) 0:00:10.371 *********
2026-03-03 01:06:11.724358 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724361 | orchestrator |
2026-03-03 01:06:11.724366 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.724370 | orchestrator | Tuesday 03 March 2026 01:04:55 +0000 (0:00:00.147) 0:00:10.519 *********
2026-03-03 01:06:11.724373 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724383 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.724386 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.724393 | orchestrator |
2026-03-03 01:06:11.724396 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-03 01:06:11.724400 | orchestrator | Tuesday 03 March 2026 01:04:55 +0000 (0:00:00.282) 0:00:10.801 *********
2026-03-03 01:06:11.724403 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:06:11.724406 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:06:11.724409 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:06:11.724412 | orchestrator |
2026-03-03 01:06:11.724415 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-03 01:06:11.724418 | orchestrator | Tuesday 03 March 2026 01:04:55 +0000 (0:00:00.268) 0:00:11.069 *********
2026-03-03 01:06:11.724422 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724425 | orchestrator |
2026-03-03 01:06:11.724428 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-03 01:06:11.724431 | orchestrator | Tuesday 03 March 2026 01:04:56 +0000 (0:00:00.132) 0:00:11.202 *********
2026-03-03 01:06:11.724440 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:06:11.724443 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:06:11.724446 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:06:11.724450 | orchestrator |
2026-03-03 01:06:11.724453 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-03 01:06:11.724456 | orchestrator | Tuesday 03 March 2026 01:04:56 +0000 (0:00:00.413) 0:00:11.615 *********
2026-03-03 01:06:11.724459 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:06:11.724462 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:06:11.724465 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:06:11.724469 | orchestrator |
2026-03-03 01:06:11.724472 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-03 01:06:11.724475
| orchestrator | Tuesday 03 March 2026 01:04:57 +0000 (0:00:01.456) 0:00:13.071 ********* 2026-03-03 01:06:11.724478 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-03 01:06:11.724481 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-03 01:06:11.724484 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-03 01:06:11.724505 | orchestrator | 2026-03-03 01:06:11.724508 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-03 01:06:11.724511 | orchestrator | Tuesday 03 March 2026 01:04:59 +0000 (0:00:01.624) 0:00:14.696 ********* 2026-03-03 01:06:11.724514 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-03 01:06:11.724520 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-03 01:06:11.724524 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-03 01:06:11.724527 | orchestrator | 2026-03-03 01:06:11.724530 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-03 01:06:11.724533 | orchestrator | Tuesday 03 March 2026 01:05:01 +0000 (0:00:02.119) 0:00:16.815 ********* 2026-03-03 01:06:11.724536 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-03 01:06:11.724539 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-03 01:06:11.724542 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-03 01:06:11.724545 | orchestrator | 2026-03-03 01:06:11.724549 | orchestrator | TASK [horizon : 
Copying over existing policy file] ***************************** 2026-03-03 01:06:11.724552 | orchestrator | Tuesday 03 March 2026 01:05:03 +0000 (0:00:01.760) 0:00:18.575 ********* 2026-03-03 01:06:11.724555 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:06:11.724558 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:06:11.724561 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:06:11.724564 | orchestrator | 2026-03-03 01:06:11.724568 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-03 01:06:11.724571 | orchestrator | Tuesday 03 March 2026 01:05:03 +0000 (0:00:00.288) 0:00:18.864 ********* 2026-03-03 01:06:11.724574 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:06:11.724577 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:06:11.724580 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:06:11.724583 | orchestrator | 2026-03-03 01:06:11.724586 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-03 01:06:11.724589 | orchestrator | Tuesday 03 March 2026 01:05:04 +0000 (0:00:00.279) 0:00:19.144 ********* 2026-03-03 01:06:11.724593 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:06:11.724596 | orchestrator | 2026-03-03 01:06:11.724599 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-03 01:06:11.724605 | orchestrator | Tuesday 03 March 2026 01:05:04 +0000 (0:00:00.805) 0:00:19.949 ********* 2026-03-03 01:06:11.724613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 
'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-03 01:06:11.724620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2026-03-03 01:06:11.724630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-03 01:06:11.724633 | orchestrator | 2026-03-03 01:06:11.724636 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-03 01:06:11.724642 | orchestrator | Tuesday 03 March 2026 01:05:06 +0000 (0:00:01.629) 0:00:21.579 ********* 2026-03-03 01:06:11.724647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-03 01:06:11.724653 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:06:11.724659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-03 01:06:11.724662 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:06:11.724669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-03 01:06:11.724676 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:06:11.724679 | orchestrator | 2026-03-03 01:06:11.724683 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-03 01:06:11.724686 | orchestrator | Tuesday 03 March 2026 01:05:07 +0000 (0:00:00.630) 0:00:22.210 ********* 2026-03-03 01:06:11.724691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-03 01:06:11.724695 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:06:11.724701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-03-03 01:06:11.724707 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:06:11.724712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-03 01:06:11.724717 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:06:11.724723 | orchestrator | 2026-03-03 01:06:11.724728 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-03 01:06:11.724735 | orchestrator | Tuesday 03 March 2026 01:05:07 +0000 (0:00:00.883) 0:00:23.093 ********* 2026-03-03 01:06:11.724747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 
'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-03 01:06:11.724756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-03 01:06:11.724770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-03 01:06:11.724776 | orchestrator | 2026-03-03 01:06:11.724782 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-03 01:06:11.724787 | orchestrator | Tuesday 03 March 2026 01:05:09 +0000 (0:00:01.652) 0:00:24.745 ********* 2026-03-03 01:06:11.724793 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:06:11.724799 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:06:11.724804 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:06:11.724810 | orchestrator | 2026-03-03 01:06:11.724815 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-03 01:06:11.724821 | orchestrator | Tuesday 03 March 2026 01:05:09 
+0000 (0:00:00.292) 0:00:25.038 ********* 2026-03-03 01:06:11.724831 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:06:11.724836 | orchestrator | 2026-03-03 01:06:11.724842 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-03 01:06:11.724847 | orchestrator | Tuesday 03 March 2026 01:05:10 +0000 (0:00:00.649) 0:00:25.688 ********* 2026-03-03 01:06:11.724852 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:06:11.724857 | orchestrator | 2026-03-03 01:06:11.724866 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-03 01:06:11.724872 | orchestrator | Tuesday 03 March 2026 01:05:12 +0000 (0:00:02.091) 0:00:27.779 ********* 2026-03-03 01:06:11.724878 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:06:11.724884 | orchestrator | 2026-03-03 01:06:11.724890 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-03 01:06:11.724895 | orchestrator | Tuesday 03 March 2026 01:05:14 +0000 (0:00:02.099) 0:00:29.879 ********* 2026-03-03 01:06:11.724901 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:06:11.724907 | orchestrator | 2026-03-03 01:06:11.724913 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-03 01:06:11.724919 | orchestrator | Tuesday 03 March 2026 01:05:29 +0000 (0:00:14.689) 0:00:44.568 ********* 2026-03-03 01:06:11.724925 | orchestrator | 2026-03-03 01:06:11.724930 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-03 01:06:11.724936 | orchestrator | Tuesday 03 March 2026 01:05:29 +0000 (0:00:00.063) 0:00:44.632 ********* 2026-03-03 01:06:11.724942 | orchestrator | 2026-03-03 01:06:11.724948 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 
2026-03-03 01:06:11.724952 | orchestrator | Tuesday 03 March 2026 01:05:29 +0000 (0:00:00.062) 0:00:44.694 ********* 2026-03-03 01:06:11.724955 | orchestrator | 2026-03-03 01:06:11.724958 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-03 01:06:11.724962 | orchestrator | Tuesday 03 March 2026 01:05:29 +0000 (0:00:00.061) 0:00:44.756 ********* 2026-03-03 01:06:11.724965 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:06:11.724968 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:06:11.724971 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:06:11.724976 | orchestrator | 2026-03-03 01:06:11.724981 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:06:11.724986 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-03 01:06:11.724991 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-03 01:06:11.724997 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-03 01:06:11.725002 | orchestrator | 2026-03-03 01:06:11.725007 | orchestrator | 2026-03-03 01:06:11.725017 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:06:11.725022 | orchestrator | Tuesday 03 March 2026 01:06:09 +0000 (0:00:39.529) 0:01:24.286 ********* 2026-03-03 01:06:11.725027 | orchestrator | =============================================================================== 2026-03-03 01:06:11.725033 | orchestrator | horizon : Restart horizon container ------------------------------------ 39.53s 2026-03-03 01:06:11.725039 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.69s 2026-03-03 01:06:11.725044 | orchestrator | horizon : Copying over kolla-settings.py 
-------------------------------- 2.12s 2026-03-03 01:06:11.725049 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.10s 2026-03-03 01:06:11.725056 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.09s 2026-03-03 01:06:11.725059 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.76s 2026-03-03 01:06:11.725063 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.65s 2026-03-03 01:06:11.725068 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.63s 2026-03-03 01:06:11.725073 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.62s 2026-03-03 01:06:11.725078 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.46s 2026-03-03 01:06:11.725083 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.12s 2026-03-03 01:06:11.725093 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s 2026-03-03 01:06:11.725099 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-03-03 01:06:11.725104 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2026-03-03 01:06:11.725109 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2026-03-03 01:06:11.725115 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.63s 2026-03-03 01:06:11.725120 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-03-03 01:06:11.725125 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2026-03-03 01:06:11.725131 | orchestrator | horizon : Update custom policy file name 
-------------------------------- 0.49s 2026-03-03 01:06:11.725136 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.44s 2026-03-03 01:06:11.725141 | orchestrator | 2026-03-03 01:06:11 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:14.776586 | orchestrator | 2026-03-03 01:06:14 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:14.777419 | orchestrator | 2026-03-03 01:06:14 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state STARTED 2026-03-03 01:06:14.777456 | orchestrator | 2026-03-03 01:06:14 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:17.814473 | orchestrator | 2026-03-03 01:06:17 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:17.819440 | orchestrator | 2026-03-03 01:06:17 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state STARTED 2026-03-03 01:06:17.819686 | orchestrator | 2026-03-03 01:06:17 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:20.864304 | orchestrator | 2026-03-03 01:06:20 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:20.866516 | orchestrator | 2026-03-03 01:06:20 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state STARTED 2026-03-03 01:06:20.866588 | orchestrator | 2026-03-03 01:06:20 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:23.915047 | orchestrator | 2026-03-03 01:06:23 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:23.916750 | orchestrator | 2026-03-03 01:06:23 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state STARTED 2026-03-03 01:06:23.916802 | orchestrator | 2026-03-03 01:06:23 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:26.952151 | orchestrator | 2026-03-03 01:06:26 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:26.956940 | orchestrator | 
2026-03-03 01:06:26 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state STARTED 2026-03-03 01:06:26.956992 | orchestrator | 2026-03-03 01:06:26 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:06:57.393443 | orchestrator | 2026-03-03 01:06:57 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:06:57.393962 | orchestrator | 2026-03-03 01:06:57 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state STARTED 2026-03-03 01:06:57.394280 | orchestrator | 2026-03-03 01:06:57 | 
INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:00.438975 | orchestrator | 2026-03-03 01:07:00 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:00.440843 | orchestrator | 2026-03-03 01:07:00 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:00.442635 | orchestrator | 2026-03-03 01:07:00 | INFO  | Task 83971570-029a-4048-8868-e6ab1be5a9ce is in state STARTED 2026-03-03 01:07:00.444891 | orchestrator | 2026-03-03 01:07:00 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:07:00.448634 | orchestrator | 2026-03-03 01:07:00 | INFO  | Task 4614ff2e-e206-470d-9b82-c72c052abd56 is in state SUCCESS 2026-03-03 01:07:00.448833 | orchestrator | 2026-03-03 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:03.509525 | orchestrator | 2026-03-03 01:07:03 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:03.509780 | orchestrator | 2026-03-03 01:07:03 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:03.510515 | orchestrator | 2026-03-03 01:07:03 | INFO  | Task 83971570-029a-4048-8868-e6ab1be5a9ce is in state STARTED 2026-03-03 01:07:03.511779 | orchestrator | 2026-03-03 01:07:03 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:07:03.511813 | orchestrator | 2026-03-03 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:06.541060 | orchestrator | 2026-03-03 01:07:06 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:06.541200 | orchestrator | 2026-03-03 01:07:06 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED 2026-03-03 01:07:06.542090 | orchestrator | 2026-03-03 01:07:06 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:06.542826 | orchestrator | 2026-03-03 01:07:06 | INFO  | Task 
a97ffedf-633b-4a20-93ba-4d27813414bb is in state STARTED 2026-03-03 01:07:06.543925 | orchestrator | 2026-03-03 01:07:06 | INFO  | Task 83971570-029a-4048-8868-e6ab1be5a9ce is in state SUCCESS 2026-03-03 01:07:06.544741 | orchestrator | 2026-03-03 01:07:06 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state STARTED 2026-03-03 01:07:06.544777 | orchestrator | 2026-03-03 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:09.565192 | orchestrator | 2026-03-03 01:07:09 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:09.565340 | orchestrator | 2026-03-03 01:07:09 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED 2026-03-03 01:07:09.566151 | orchestrator | 2026-03-03 01:07:09 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:09.566660 | orchestrator | 2026-03-03 01:07:09 | INFO  | Task a97ffedf-633b-4a20-93ba-4d27813414bb is in state STARTED 2026-03-03 01:07:09.568623 | orchestrator | 2026-03-03 01:07:09 | INFO  | Task 7eff60bd-8d2b-4995-b031-a4eeb9464d8e is in state SUCCESS 2026-03-03 01:07:09.569654 | orchestrator | 2026-03-03 01:07:09.569681 | orchestrator | 2026-03-03 01:07:09.569690 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-03 01:07:09.569709 | orchestrator | 2026-03-03 01:07:09.569715 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-03 01:07:09.569722 | orchestrator | Tuesday 03 March 2026 01:06:05 +0000 (0:00:00.222) 0:00:00.222 ********* 2026-03-03 01:07:09.569729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-03 01:07:09.569737 | orchestrator | 2026-03-03 01:07:09.569743 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-03 01:07:09.569749 | 
orchestrator | Tuesday 03 March 2026 01:06:06 +0000 (0:00:00.239) 0:00:00.462 ********* 2026-03-03 01:07:09.569765 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-03 01:07:09.569772 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-03 01:07:09.569779 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-03 01:07:09.569786 | orchestrator | 2026-03-03 01:07:09.569792 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-03 01:07:09.569798 | orchestrator | Tuesday 03 March 2026 01:06:07 +0000 (0:00:01.267) 0:00:01.729 ********* 2026-03-03 01:07:09.569805 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-03 01:07:09.569832 | orchestrator | 2026-03-03 01:07:09.569839 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-03 01:07:09.569846 | orchestrator | Tuesday 03 March 2026 01:06:08 +0000 (0:00:01.428) 0:00:03.158 ********* 2026-03-03 01:07:09.569852 | orchestrator | changed: [testbed-manager] 2026-03-03 01:07:09.569859 | orchestrator | 2026-03-03 01:07:09.569865 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-03 01:07:09.569872 | orchestrator | Tuesday 03 March 2026 01:06:09 +0000 (0:00:01.129) 0:00:04.288 ********* 2026-03-03 01:07:09.569878 | orchestrator | changed: [testbed-manager] 2026-03-03 01:07:09.569884 | orchestrator | 2026-03-03 01:07:09.569891 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-03 01:07:09.569897 | orchestrator | Tuesday 03 March 2026 01:06:10 +0000 (0:00:00.845) 0:00:05.134 ********* 2026-03-03 01:07:09.569903 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-03-03 01:07:09.569910 | orchestrator | ok: [testbed-manager] 2026-03-03 01:07:09.569916 | orchestrator | 2026-03-03 01:07:09.569922 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-03 01:07:09.569928 | orchestrator | Tuesday 03 March 2026 01:06:50 +0000 (0:00:39.704) 0:00:44.839 ********* 2026-03-03 01:07:09.569935 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-03 01:07:09.569942 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-03 01:07:09.569949 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-03 01:07:09.569955 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-03 01:07:09.569962 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-03 01:07:09.569968 | orchestrator | 2026-03-03 01:07:09.569974 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-03 01:07:09.569981 | orchestrator | Tuesday 03 March 2026 01:06:54 +0000 (0:00:03.673) 0:00:48.513 ********* 2026-03-03 01:07:09.569987 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-03 01:07:09.569994 | orchestrator | 2026-03-03 01:07:09.570000 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-03 01:07:09.570006 | orchestrator | Tuesday 03 March 2026 01:06:54 +0000 (0:00:00.427) 0:00:48.940 ********* 2026-03-03 01:07:09.570012 | orchestrator | skipping: [testbed-manager] 2026-03-03 01:07:09.570047 | orchestrator | 2026-03-03 01:07:09.570051 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-03 01:07:09.570055 | orchestrator | Tuesday 03 March 2026 01:06:54 +0000 (0:00:00.129) 0:00:49.070 ********* 2026-03-03 01:07:09.570059 | orchestrator | skipping: [testbed-manager] 2026-03-03 01:07:09.570063 | orchestrator | 2026-03-03 01:07:09.570067 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-03 01:07:09.570070 | orchestrator | Tuesday 03 March 2026 01:06:55 +0000 (0:00:00.418) 0:00:49.489 ********* 2026-03-03 01:07:09.570074 | orchestrator | changed: [testbed-manager] 2026-03-03 01:07:09.570078 | orchestrator | 2026-03-03 01:07:09.570082 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-03 01:07:09.570087 | orchestrator | Tuesday 03 March 2026 01:06:56 +0000 (0:00:01.254) 0:00:50.743 ********* 2026-03-03 01:07:09.570093 | orchestrator | changed: [testbed-manager] 2026-03-03 01:07:09.570101 | orchestrator | 2026-03-03 01:07:09.570108 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-03 01:07:09.570114 | orchestrator | Tuesday 03 March 2026 01:06:56 +0000 (0:00:00.619) 0:00:51.363 ********* 2026-03-03 01:07:09.570120 | orchestrator | changed: [testbed-manager] 2026-03-03 01:07:09.570125 | orchestrator | 2026-03-03 01:07:09.570132 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-03 01:07:09.570137 | orchestrator | Tuesday 03 March 2026 01:06:57 +0000 (0:00:00.559) 0:00:51.922 ********* 2026-03-03 01:07:09.570144 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-03 01:07:09.570157 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-03 01:07:09.570169 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-03 01:07:09.570175 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-03 01:07:09.570180 | orchestrator | 2026-03-03 01:07:09.570186 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:07:09.570193 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-03 01:07:09.570199 | orchestrator | 2026-03-03 01:07:09.570205 | orchestrator | 2026-03-03 
01:07:09.570221 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:07:09.570227 | orchestrator | Tuesday 03 March 2026 01:06:58 +0000 (0:00:01.375) 0:00:53.298 ********* 2026-03-03 01:07:09.570234 | orchestrator | =============================================================================== 2026-03-03 01:07:09.570240 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.70s 2026-03-03 01:07:09.570247 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.67s 2026-03-03 01:07:09.570254 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.43s 2026-03-03 01:07:09.570261 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.38s 2026-03-03 01:07:09.570268 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.27s 2026-03-03 01:07:09.570279 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.25s 2026-03-03 01:07:09.570285 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.13s 2026-03-03 01:07:09.570292 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.85s 2026-03-03 01:07:09.570300 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.62s 2026-03-03 01:07:09.570307 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.56s 2026-03-03 01:07:09.570314 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s 2026-03-03 01:07:09.570321 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.42s 2026-03-03 01:07:09.570329 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-03-03 01:07:09.570335 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-03-03 01:07:09.570342 | orchestrator | 2026-03-03 01:07:09.570349 | orchestrator | 2026-03-03 01:07:09.570356 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 01:07:09.570363 | orchestrator | 2026-03-03 01:07:09.570371 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:07:09.570378 | orchestrator | Tuesday 03 March 2026 01:07:02 +0000 (0:00:00.137) 0:00:00.137 ********* 2026-03-03 01:07:09.570385 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:07:09.570393 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:07:09.570399 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:07:09.570407 | orchestrator | 2026-03-03 01:07:09.570414 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:07:09.570421 | orchestrator | Tuesday 03 March 2026 01:07:03 +0000 (0:00:00.225) 0:00:00.362 ********* 2026-03-03 01:07:09.570427 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-03 01:07:09.570433 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-03 01:07:09.570440 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-03 01:07:09.570447 | orchestrator | 2026-03-03 01:07:09.570454 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-03 01:07:09.570461 | orchestrator | 2026-03-03 01:07:09.570468 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-03 01:07:09.570475 | orchestrator | Tuesday 03 March 2026 01:07:03 +0000 (0:00:00.635) 0:00:00.998 ********* 2026-03-03 01:07:09.570481 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:07:09.570487 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:07:09.570500 | orchestrator | ok: 
[testbed-node-2] 2026-03-03 01:07:09.570506 | orchestrator | 2026-03-03 01:07:09.570513 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:07:09.570521 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 01:07:09.570528 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 01:07:09.570535 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 01:07:09.570542 | orchestrator | 2026-03-03 01:07:09.570549 | orchestrator | 2026-03-03 01:07:09.570557 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:07:09.570564 | orchestrator | Tuesday 03 March 2026 01:07:04 +0000 (0:00:00.702) 0:00:01.701 ********* 2026-03-03 01:07:09.570571 | orchestrator | =============================================================================== 2026-03-03 01:07:09.570578 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.70s 2026-03-03 01:07:09.570584 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2026-03-03 01:07:09.570592 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s 2026-03-03 01:07:09.570599 | orchestrator | 2026-03-03 01:07:09.570606 | orchestrator | 2026-03-03 01:07:09.570613 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 01:07:09.570620 | orchestrator | 2026-03-03 01:07:09.570628 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:07:09.570635 | orchestrator | Tuesday 03 March 2026 01:04:45 +0000 (0:00:00.235) 0:00:00.235 ********* 2026-03-03 01:07:09.570642 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:07:09.570649 | 
orchestrator | ok: [testbed-node-1] 2026-03-03 01:07:09.570658 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:07:09.570664 | orchestrator | 2026-03-03 01:07:09.570671 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:07:09.570677 | orchestrator | Tuesday 03 March 2026 01:04:45 +0000 (0:00:00.260) 0:00:00.495 ********* 2026-03-03 01:07:09.570683 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-03 01:07:09.570690 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-03 01:07:09.570708 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-03 01:07:09.570715 | orchestrator | 2026-03-03 01:07:09.570722 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-03 01:07:09.570728 | orchestrator | 2026-03-03 01:07:09.570779 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-03 01:07:09.570789 | orchestrator | Tuesday 03 March 2026 01:04:45 +0000 (0:00:00.364) 0:00:00.859 ********* 2026-03-03 01:07:09.570796 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:07:09.570802 | orchestrator | 2026-03-03 01:07:09.570810 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-03 01:07:09.570816 | orchestrator | Tuesday 03 March 2026 01:04:46 +0000 (0:00:00.498) 0:00:01.358 ********* 2026-03-03 01:07:09.570833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.570851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.570859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.570867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.570901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-03-03 01:07:09.570910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.570923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.570931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.570938 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.570945 | orchestrator | 2026-03-03 01:07:09.570953 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-03 01:07:09.570959 | orchestrator | Tuesday 03 March 2026 01:04:47 +0000 (0:00:01.744) 0:00:03.102 ********* 2026-03-03 01:07:09.570967 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.570974 | orchestrator | 2026-03-03 01:07:09.570981 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-03 01:07:09.570989 | orchestrator | Tuesday 03 March 2026 01:04:48 +0000 (0:00:00.113) 0:00:03.216 ********* 2026-03-03 01:07:09.570996 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.571003 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.571009 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.571016 | orchestrator | 2026-03-03 01:07:09.571022 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-03 01:07:09.571029 | orchestrator | Tuesday 03 March 2026 01:04:48 +0000 (0:00:00.357) 0:00:03.573 ********* 2026-03-03 01:07:09.571037 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 01:07:09.571044 | orchestrator | 2026-03-03 01:07:09.571052 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-03 
01:07:09.571058 | orchestrator | Tuesday 03 March 2026 01:04:49 +0000 (0:00:00.761) 0:00:04.334 ********* 2026-03-03 01:07:09.571066 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:07:09.571073 | orchestrator | 2026-03-03 01:07:09.571080 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-03 01:07:09.571092 | orchestrator | Tuesday 03 March 2026 01:04:49 +0000 (0:00:00.462) 0:00:04.797 ********* 2026-03-03 01:07:09.571104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.571117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.571124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.571132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571175 | orchestrator | 2026-03-03 01:07:09.571181 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-03 01:07:09.571188 | orchestrator | Tuesday 03 March 2026 01:04:52 +0000 (0:00:03.004) 0:00:07.801 ********* 2026-03-03 01:07:09.571195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:07:09.571207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:07:09.571225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:07:09.571232 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.571239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:07:09.571246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-03 01:07:09.571253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:07:09.571259 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.571271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:07:09.571284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:07:09.571291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:07:09.571298 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.571305 | orchestrator | 2026-03-03 01:07:09.571309 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-03 01:07:09.571313 | orchestrator | Tuesday 03 March 2026 01:04:53 +0000 (0:00:00.614) 0:00:08.416 ********* 2026-03-03 01:07:09.571317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:07:09.571322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:07:09.571326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:07:09.571333 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.571343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:07:09.571347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:07:09.571352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:07:09.571355 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.571360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:07:09.571367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 
01:07:09.571380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:07:09.571388 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.571394 | orchestrator | 2026-03-03 01:07:09.571401 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-03 01:07:09.571407 | orchestrator | Tuesday 03 March 2026 01:04:53 +0000 (0:00:00.764) 0:00:09.180 ********* 2026-03-03 01:07:09.571417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.571424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.571432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.571446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2026-03-03 01:07:09.571497 | orchestrator | 2026-03-03 01:07:09.571504 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-03 01:07:09.571510 | orchestrator | Tuesday 03 March 2026 01:04:56 +0000 (0:00:02.780) 0:00:11.961 ********* 2026-03-03 01:07:09.571520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.571532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 
01:07:09.571539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.571547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:07:09.571554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.571565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:07:09.571576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.571600 | orchestrator | 2026-03-03 01:07:09.571606 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-03 01:07:09.571613 | orchestrator | Tuesday 03 March 2026 01:05:01 +0000 (0:00:04.813) 0:00:16.774 ********* 2026-03-03 01:07:09.571620 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:07:09.571626 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:07:09.571633 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:07:09.571639 | orchestrator | 2026-03-03 01:07:09.571646 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
************* 2026-03-03 01:07:09.571652 | orchestrator | Tuesday 03 March 2026 01:05:02 +0000 (0:00:01.248) 0:00:18.023 ********* 2026-03-03 01:07:09.571658 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.571664 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.571671 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.571681 | orchestrator | 2026-03-03 01:07:09.571688 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-03 01:07:09.571754 | orchestrator | Tuesday 03 March 2026 01:05:03 +0000 (0:00:00.479) 0:00:18.503 ********* 2026-03-03 01:07:09.571762 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.571768 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.571774 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.571781 | orchestrator | 2026-03-03 01:07:09.571787 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-03 01:07:09.571793 | orchestrator | Tuesday 03 March 2026 01:05:03 +0000 (0:00:00.300) 0:00:18.804 ********* 2026-03-03 01:07:09.571800 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.571804 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.571808 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.571811 | orchestrator | 2026-03-03 01:07:09.571815 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-03 01:07:09.571819 | orchestrator | Tuesday 03 March 2026 01:05:04 +0000 (0:00:00.508) 0:00:19.312 ********* 2026-03-03 01:07:09.571826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:07:09.571839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:07:09.571850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:07:09.571857 | 
orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.571865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:07:09.571876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:07:09.571881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:07:09.571885 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.571896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-03 01:07:09.571906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-03 01:07:09.571913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-03 01:07:09.571924 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.571931 | orchestrator | 2026-03-03 01:07:09.571938 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-03 01:07:09.571945 | orchestrator | Tuesday 03 March 2026 01:05:04 +0000 (0:00:00.595) 0:00:19.908 ********* 2026-03-03 01:07:09.571952 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.571959 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.571966 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.571972 | orchestrator | 2026-03-03 01:07:09.571979 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-03 01:07:09.571985 | orchestrator | Tuesday 03 March 2026 01:05:05 +0000 (0:00:00.356) 0:00:20.264 ********* 2026-03-03 01:07:09.571992 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-03 01:07:09.571999 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-03 01:07:09.572005 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-03 01:07:09.572011 | orchestrator | 2026-03-03 01:07:09.572018 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-03 01:07:09.572024 | orchestrator | Tuesday 03 March 2026 01:05:06 +0000 (0:00:01.481) 0:00:21.746 ********* 2026-03-03 01:07:09.572030 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 01:07:09.572037 | orchestrator | 2026-03-03 01:07:09.572044 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-03 01:07:09.572050 | orchestrator | Tuesday 03 March 2026 01:05:07 +0000 (0:00:01.016) 0:00:22.763 ********* 2026-03-03 01:07:09.572057 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.572063 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.572069 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.572075 | orchestrator | 2026-03-03 01:07:09.572081 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-03 01:07:09.572087 | orchestrator | Tuesday 03 March 2026 01:05:08 +0000 (0:00:00.785) 0:00:23.548 ********* 2026-03-03 01:07:09.572093 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-03 01:07:09.572100 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-03 01:07:09.572106 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 01:07:09.572112 | orchestrator | 2026-03-03 01:07:09.572119 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-03 01:07:09.572125 | orchestrator | Tuesday 03 March 2026 01:05:09 +0000 (0:00:01.375) 0:00:24.924 ********* 2026-03-03 01:07:09.572132 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:07:09.572138 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:07:09.572144 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:07:09.572151 | orchestrator | 2026-03-03 
01:07:09.572158 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-03 01:07:09.572164 | orchestrator | Tuesday 03 March 2026 01:05:10 +0000 (0:00:00.297) 0:00:25.221 ********* 2026-03-03 01:07:09.572171 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-03 01:07:09.572178 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-03 01:07:09.572184 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-03 01:07:09.572191 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-03 01:07:09.572197 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-03 01:07:09.572209 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-03 01:07:09.572216 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-03 01:07:09.572228 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-03 01:07:09.572234 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-03 01:07:09.572240 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-03 01:07:09.572249 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-03 01:07:09.572256 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-03 01:07:09.572262 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 
2026-03-03 01:07:09.572269 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-03 01:07:09.572275 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-03 01:07:09.572281 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-03 01:07:09.572287 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-03 01:07:09.572294 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-03 01:07:09.572300 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-03 01:07:09.572307 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-03 01:07:09.572313 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-03 01:07:09.572319 | orchestrator | 2026-03-03 01:07:09.572326 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-03 01:07:09.572332 | orchestrator | Tuesday 03 March 2026 01:05:17 +0000 (0:00:07.598) 0:00:32.819 ********* 2026-03-03 01:07:09.572338 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-03 01:07:09.572345 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-03 01:07:09.572351 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-03 01:07:09.572358 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-03 01:07:09.572364 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-03 01:07:09.572371 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-03 01:07:09.572375 | orchestrator | 2026-03-03 01:07:09.572379 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-03 01:07:09.572383 | orchestrator | Tuesday 03 March 2026 01:05:20 +0000 (0:00:02.406) 0:00:35.226 ********* 2026-03-03 01:07:09.572387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.572400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.572408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-03 01:07:09.572412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.572416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.572421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-03 01:07:09.572425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.572435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.572444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-03 01:07:09.572451 | orchestrator | 2026-03-03 01:07:09.572458 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-03 01:07:09.572464 | orchestrator | Tuesday 03 March 2026 01:05:22 +0000 (0:00:02.400) 0:00:37.626 ********* 2026-03-03 01:07:09.572471 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.572477 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.572484 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.572490 | orchestrator | 2026-03-03 01:07:09.572497 | 
orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-03 01:07:09.572502 | orchestrator | Tuesday 03 March 2026 01:05:22 +0000 (0:00:00.235) 0:00:37.862 ********* 2026-03-03 01:07:09.572509 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:07:09.572516 | orchestrator | 2026-03-03 01:07:09.572522 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-03 01:07:09.572529 | orchestrator | Tuesday 03 March 2026 01:05:25 +0000 (0:00:02.566) 0:00:40.429 ********* 2026-03-03 01:07:09.572535 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:07:09.572541 | orchestrator | 2026-03-03 01:07:09.572548 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-03 01:07:09.572554 | orchestrator | Tuesday 03 March 2026 01:05:27 +0000 (0:00:02.056) 0:00:42.485 ********* 2026-03-03 01:07:09.572560 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:07:09.572566 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:07:09.572573 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:07:09.572579 | orchestrator | 2026-03-03 01:07:09.572584 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-03 01:07:09.572588 | orchestrator | Tuesday 03 March 2026 01:05:28 +0000 (0:00:00.979) 0:00:43.465 ********* 2026-03-03 01:07:09.572592 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:07:09.572596 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:07:09.572600 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:07:09.572604 | orchestrator | 2026-03-03 01:07:09.572607 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-03 01:07:09.572611 | orchestrator | Tuesday 03 March 2026 01:05:28 +0000 (0:00:00.288) 0:00:43.753 ********* 2026-03-03 01:07:09.572615 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.572622 | 
orchestrator | skipping: [testbed-node-1] 2026-03-03 01:07:09.572626 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.572630 | orchestrator | 2026-03-03 01:07:09.572634 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-03 01:07:09.572638 | orchestrator | Tuesday 03 March 2026 01:05:28 +0000 (0:00:00.276) 0:00:44.029 ********* 2026-03-03 01:07:09.572641 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:07:09.572645 | orchestrator | 2026-03-03 01:07:09.572649 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-03 01:07:09.572653 | orchestrator | Tuesday 03 March 2026 01:05:41 +0000 (0:00:12.367) 0:00:56.396 ********* 2026-03-03 01:07:09.572657 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:07:09.572661 | orchestrator | 2026-03-03 01:07:09.572664 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-03 01:07:09.572668 | orchestrator | Tuesday 03 March 2026 01:05:51 +0000 (0:00:10.071) 0:01:06.468 ********* 2026-03-03 01:07:09.572672 | orchestrator | 2026-03-03 01:07:09.572676 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-03 01:07:09.572679 | orchestrator | Tuesday 03 March 2026 01:05:51 +0000 (0:00:00.070) 0:01:06.539 ********* 2026-03-03 01:07:09.572683 | orchestrator | 2026-03-03 01:07:09.572687 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-03 01:07:09.572691 | orchestrator | Tuesday 03 March 2026 01:05:51 +0000 (0:00:00.061) 0:01:06.600 ********* 2026-03-03 01:07:09.572725 | orchestrator | 2026-03-03 01:07:09.572732 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-03 01:07:09.572739 | orchestrator | Tuesday 03 March 2026 01:05:51 +0000 (0:00:00.059) 0:01:06.660 ********* 2026-03-03 01:07:09.572743 | 
orchestrator | changed: [testbed-node-0] 2026-03-03 01:07:09.572747 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:07:09.572751 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:07:09.572755 | orchestrator | 2026-03-03 01:07:09.572759 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-03 01:07:09.572763 | orchestrator | Tuesday 03 March 2026 01:05:59 +0000 (0:00:07.812) 0:01:14.473 ********* 2026-03-03 01:07:09.572767 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:07:09.572771 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:07:09.572774 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:07:09.572778 | orchestrator | 2026-03-03 01:07:09.572782 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-03 01:07:09.572786 | orchestrator | Tuesday 03 March 2026 01:06:08 +0000 (0:00:09.715) 0:01:24.189 ********* 2026-03-03 01:07:09.572793 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:07:09.572797 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:07:09.572801 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:07:09.572804 | orchestrator | 2026-03-03 01:07:09.572808 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-03 01:07:09.572812 | orchestrator | Tuesday 03 March 2026 01:06:14 +0000 (0:00:05.246) 0:01:29.435 ********* 2026-03-03 01:07:09.572816 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:07:09.572820 | orchestrator | 2026-03-03 01:07:09.572824 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-03 01:07:09.572827 | orchestrator | Tuesday 03 March 2026 01:06:14 +0000 (0:00:00.630) 0:01:30.066 ********* 2026-03-03 01:07:09.572831 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:07:09.572838 | 
orchestrator | ok: [testbed-node-0] 2026-03-03 01:07:09.572842 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:07:09.572846 | orchestrator | 2026-03-03 01:07:09.572850 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-03 01:07:09.572854 | orchestrator | Tuesday 03 March 2026 01:06:15 +0000 (0:00:00.747) 0:01:30.813 ********* 2026-03-03 01:07:09.572857 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:07:09.572862 | orchestrator | 2026-03-03 01:07:09.572868 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-03 01:07:09.572878 | orchestrator | Tuesday 03 March 2026 01:06:17 +0000 (0:00:01.444) 0:01:32.258 ********* 2026-03-03 01:07:09.572885 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-03 01:07:09.572891 | orchestrator | 2026-03-03 01:07:09.572898 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-03 01:07:09.572904 | orchestrator | Tuesday 03 March 2026 01:06:27 +0000 (0:00:10.523) 0:01:42.782 ********* 2026-03-03 01:07:09.572910 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-03 01:07:09.572914 | orchestrator | 2026-03-03 01:07:09.572917 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-03 01:07:09.572922 | orchestrator | Tuesday 03 March 2026 01:06:56 +0000 (0:00:29.135) 0:02:11.918 ********* 2026-03-03 01:07:09.572925 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-03 01:07:09.572929 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-03 01:07:09.572933 | orchestrator | 2026-03-03 01:07:09.572937 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-03 01:07:09.572941 | orchestrator | Tuesday 03 
March 2026 01:07:02 +0000 (0:00:06.037) 0:02:17.956 ********* 2026-03-03 01:07:09.572945 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.572949 | orchestrator | 2026-03-03 01:07:09.572953 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-03 01:07:09.572956 | orchestrator | Tuesday 03 March 2026 01:07:02 +0000 (0:00:00.102) 0:02:18.059 ********* 2026-03-03 01:07:09.572960 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.572964 | orchestrator | 2026-03-03 01:07:09.572968 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-03 01:07:09.572972 | orchestrator | Tuesday 03 March 2026 01:07:02 +0000 (0:00:00.099) 0:02:18.158 ********* 2026-03-03 01:07:09.572977 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.572983 | orchestrator | 2026-03-03 01:07:09.572989 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-03 01:07:09.572995 | orchestrator | Tuesday 03 March 2026 01:07:03 +0000 (0:00:00.130) 0:02:18.288 ********* 2026-03-03 01:07:09.573001 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.573008 | orchestrator | 2026-03-03 01:07:09.573014 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-03 01:07:09.573020 | orchestrator | Tuesday 03 March 2026 01:07:03 +0000 (0:00:00.526) 0:02:18.815 ********* 2026-03-03 01:07:09.573026 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:07:09.573032 | orchestrator | 2026-03-03 01:07:09.573039 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-03 01:07:09.573045 | orchestrator | Tuesday 03 March 2026 01:07:07 +0000 (0:00:03.991) 0:02:22.806 ********* 2026-03-03 01:07:09.573052 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:07:09.573058 | orchestrator | skipping: [testbed-node-1] 2026-03-03 
01:07:09.573065 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:07:09.573071 | orchestrator | 2026-03-03 01:07:09.573078 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:07:09.573084 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-03 01:07:09.573092 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-03 01:07:09.573099 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-03 01:07:09.573105 | orchestrator | 2026-03-03 01:07:09.573112 | orchestrator | 2026-03-03 01:07:09.573118 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:07:09.573125 | orchestrator | Tuesday 03 March 2026 01:07:07 +0000 (0:00:00.383) 0:02:23.189 ********* 2026-03-03 01:07:09.573137 | orchestrator | =============================================================================== 2026-03-03 01:07:09.573141 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.14s 2026-03-03 01:07:09.573145 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.37s 2026-03-03 01:07:09.573149 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.52s 2026-03-03 01:07:09.573153 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.07s 2026-03-03 01:07:09.573160 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.72s 2026-03-03 01:07:09.573164 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 7.81s 2026-03-03 01:07:09.573168 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 7.60s 2026-03-03 01:07:09.573172 | orchestrator | 
service-ks-register : keystone | Creating endpoints --------------------- 6.04s 2026-03-03 01:07:09.573176 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.25s 2026-03-03 01:07:09.573180 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.81s 2026-03-03 01:07:09.573183 | orchestrator | keystone : Creating default user role ----------------------------------- 3.99s 2026-03-03 01:07:09.573188 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.00s 2026-03-03 01:07:09.573192 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.78s 2026-03-03 01:07:09.573196 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.57s 2026-03-03 01:07:09.573199 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.41s 2026-03-03 01:07:09.573203 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.40s 2026-03-03 01:07:09.573207 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.06s 2026-03-03 01:07:09.573211 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.74s 2026-03-03 01:07:09.573214 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.48s 2026-03-03 01:07:09.573218 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.44s 2026-03-03 01:07:09.573222 | orchestrator | 2026-03-03 01:07:09 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED 2026-03-03 01:07:09.573226 | orchestrator | 2026-03-03 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:12.615116 | orchestrator | 2026-03-03 01:07:12 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:12.615217 | orchestrator | 
2026-03-03 01:07:12 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED 2026-03-03 01:07:12.615638 | orchestrator | 2026-03-03 01:07:12 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:12.616264 | orchestrator | 2026-03-03 01:07:12 | INFO  | Task a97ffedf-633b-4a20-93ba-4d27813414bb is in state STARTED 2026-03-03 01:07:12.616597 | orchestrator | 2026-03-03 01:07:12 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED 2026-03-03 01:07:12.616628 | orchestrator | 2026-03-03 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:15.646397 | orchestrator | 2026-03-03 01:07:15 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:15.649097 | orchestrator | 2026-03-03 01:07:15 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED 2026-03-03 01:07:15.649260 | orchestrator | 2026-03-03 01:07:15 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:15.651565 | orchestrator | 2026-03-03 01:07:15 | INFO  | Task a97ffedf-633b-4a20-93ba-4d27813414bb is in state STARTED 2026-03-03 01:07:15.652818 | orchestrator | 2026-03-03 01:07:15 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED 2026-03-03 01:07:15.652955 | orchestrator | 2026-03-03 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:18.689780 | orchestrator | 2026-03-03 01:07:18 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:18.690618 | orchestrator | 2026-03-03 01:07:18 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED 2026-03-03 01:07:18.691691 | orchestrator | 2026-03-03 01:07:18 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:18.692493 | orchestrator | 2026-03-03 01:07:18 | INFO  | Task a97ffedf-633b-4a20-93ba-4d27813414bb is in state STARTED 2026-03-03 01:07:18.692956 | orchestrator | 
2026-03-03 01:07:18 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED 2026-03-03 01:07:18.693068 | orchestrator | 2026-03-03 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:42.985410 | orchestrator | 2026-03-03 01:07:42 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:42.985503 | orchestrator | 2026-03-03 01:07:42 | INFO  | Task
c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED 2026-03-03 01:07:42.986253 | orchestrator | 2026-03-03 01:07:42 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:42.986788 | orchestrator | 2026-03-03 01:07:42 | INFO  | Task a97ffedf-633b-4a20-93ba-4d27813414bb is in state SUCCESS 2026-03-03 01:07:42.987187 | orchestrator | 2026-03-03 01:07:42 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED 2026-03-03 01:07:42.987210 | orchestrator | 2026-03-03 01:07:42 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:46.039209 | orchestrator | 2026-03-03 01:07:46 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:46.039325 | orchestrator | 2026-03-03 01:07:46 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:07:46.039338 | orchestrator | 2026-03-03 01:07:46 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED 2026-03-03 01:07:46.039360 | orchestrator | 2026-03-03 01:07:46 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:46.039369 | orchestrator | 2026-03-03 01:07:46 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED 2026-03-03 01:07:46.039385 | orchestrator | 2026-03-03 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:07:49.052956 | orchestrator | 2026-03-03 01:07:49 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:07:49.053989 | orchestrator | 2026-03-03 01:07:49 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:07:49.054108 | orchestrator | 2026-03-03 01:07:49 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED 2026-03-03 01:07:49.054750 | orchestrator | 2026-03-03 01:07:49 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:07:49.055591 | orchestrator | 2026-03-03 01:07:49 | INFO  | Task 
02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED 2026-03-03 01:07:49.055638 | orchestrator | 2026-03-03 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:08:34.569463 | orchestrator | 2026-03-03 01:08:34 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED 2026-03-03 01:08:34.572271 | orchestrator | 2026-03-03 01:08:34 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:08:34.577421 | orchestrator | 2026-03-03 01:08:34 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED 2026-03-03 01:08:34.579558 | orchestrator | 2026-03-03 01:08:34 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state STARTED 2026-03-03 01:08:34.580502 | orchestrator | 2026-03-03 01:08:34 | INFO  | Task
02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED
2026-03-03 01:08:34.580607 | orchestrator | 2026-03-03 01:08:34 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:08:37.607372 | orchestrator | 2026-03-03 01:08:37 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED
2026-03-03 01:08:37.607693 | orchestrator | 2026-03-03 01:08:37 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:08:37.608369 | orchestrator | 2026-03-03 01:08:37 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED
2026-03-03 01:08:37.608992 | orchestrator | 2026-03-03 01:08:37 | INFO  | Task b534dfe8-8896-4935-8547-87b3841c5596 is in state SUCCESS
2026-03-03 01:08:37.609285 | orchestrator |
2026-03-03 01:08:37.609303 | orchestrator |
2026-03-03 01:08:37.609311 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:08:37.609318 | orchestrator |
2026-03-03 01:08:37.609324 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:08:37.609330 | orchestrator | Tuesday 03 March 2026 01:07:10 +0000 (0:00:00.435) 0:00:00.435 *********
2026-03-03 01:08:37.609336 | orchestrator | ok: [testbed-manager]
2026-03-03 01:08:37.609343 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:08:37.609349 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:08:37.609354 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:08:37.609360 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:08:37.609366 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:08:37.609371 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:08:37.609377 | orchestrator |
2026-03-03 01:08:37.609383 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 01:08:37.609388 | orchestrator | Tuesday 03 March 2026 01:07:11 +0000 (0:00:01.103) 0:00:01.538 *********
2026-03-03 01:08:37.609411 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-03 01:08:37.609418 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-03 01:08:37.609424 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-03 01:08:37.609430 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-03 01:08:37.609435 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-03 01:08:37.609441 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-03 01:08:37.609446 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-03 01:08:37.609452 | orchestrator |
2026-03-03 01:08:37.609458 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-03 01:08:37.609463 | orchestrator |
2026-03-03 01:08:37.609469 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-03 01:08:37.609474 | orchestrator | Tuesday 03 March 2026 01:07:13 +0000 (0:00:01.353) 0:00:02.892 *********
2026-03-03 01:08:37.609480 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 01:08:37.609487 | orchestrator |
2026-03-03 01:08:37.609492 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-03 01:08:37.609498 | orchestrator | Tuesday 03 March 2026 01:07:14 +0000 (0:00:01.505) 0:00:04.397 *********
2026-03-03 01:08:37.609503 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-03 01:08:37.609509 | orchestrator |
2026-03-03 01:08:37.609515 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-03 01:08:37.609521 | orchestrator | Tuesday 03 March 2026 01:07:18 +0000 (0:00:03.599) 0:00:07.997 *********
2026-03-03 01:08:37.609526 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-03 01:08:37.609533 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-03 01:08:37.609539 | orchestrator |
2026-03-03 01:08:37.609544 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-03 01:08:37.609550 | orchestrator | Tuesday 03 March 2026 01:07:24 +0000 (0:00:06.646) 0:00:14.644 *********
2026-03-03 01:08:37.609555 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-03 01:08:37.609560 | orchestrator |
2026-03-03 01:08:37.609566 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-03 01:08:37.609572 | orchestrator | Tuesday 03 March 2026 01:07:27 +0000 (0:00:02.949) 0:00:17.593 *********
2026-03-03 01:08:37.609578 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-03 01:08:37.609583 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-03 01:08:37.609589 | orchestrator |
2026-03-03 01:08:37.609595 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-03 01:08:37.609601 | orchestrator | Tuesday 03 March 2026 01:07:31 +0000 (0:00:04.081) 0:00:21.675 *********
2026-03-03 01:08:37.609606 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-03 01:08:37.609612 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-03 01:08:37.609618 | orchestrator |
2026-03-03 01:08:37.609623 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-03 01:08:37.609629 | orchestrator | Tuesday 03 March 2026 01:07:37 +0000 (0:00:05.775) 0:00:27.450 *********
2026-03-03 01:08:37.609635 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-03 01:08:37.609640 | orchestrator |
2026-03-03 01:08:37.609646 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:08:37.609652 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.609657 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.609667 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.609681 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.609687 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.609700 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.609706 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.609712 | orchestrator |
2026-03-03 01:08:37.609718 | orchestrator |
2026-03-03 01:08:37.609724 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:08:37.609730 | orchestrator | Tuesday 03 March 2026 01:07:42 +0000 (0:00:05.005) 0:00:32.456 *********
2026-03-03 01:08:37.609736 | orchestrator | ===============================================================================
2026-03-03 01:08:37.609742 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.65s
2026-03-03 01:08:37.609747 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.78s
2026-03-03 01:08:37.609753 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.01s
2026-03-03 01:08:37.609759 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.08s
2026-03-03 01:08:37.609764 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.60s
2026-03-03 01:08:37.609770 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.95s
2026-03-03 01:08:37.609775 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.51s
2026-03-03 01:08:37.609781 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s
2026-03-03 01:08:37.609786 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s
2026-03-03 01:08:37.609792 | orchestrator |
2026-03-03 01:08:37.609798 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-03 01:08:37.609804 | orchestrator | 2.16.14
2026-03-03 01:08:37.609809 | orchestrator |
2026-03-03 01:08:37.609815 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-03-03 01:08:37.609820 | orchestrator |
2026-03-03 01:08:37.609825 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-03 01:08:37.609831 | orchestrator | Tuesday 03 March 2026 01:07:03 +0000 (0:00:00.203) 0:00:00.203 *********
2026-03-03 01:08:37.609836 | orchestrator | changed: [testbed-manager]
2026-03-03 01:08:37.609841 | orchestrator |
2026-03-03 01:08:37.609846 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-03 01:08:37.609851 | orchestrator | Tuesday 03 March 2026 01:07:04 +0000 (0:00:01.210) 0:00:01.414 *********
2026-03-03 01:08:37.609857 | orchestrator | changed: [testbed-manager]
2026-03-03 01:08:37.609862 | orchestrator |
2026-03-03 01:08:37.609867 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-03 01:08:37.609873 | orchestrator | Tuesday 03 March 2026 01:07:05 +0000 (0:00:00.859) 0:00:02.273 *********
2026-03-03 01:08:37.609879 | orchestrator | changed: [testbed-manager]
2026-03-03 01:08:37.609884 | orchestrator |
2026-03-03 01:08:37.610070 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-03 01:08:37.610085 | orchestrator | Tuesday 03 March 2026 01:07:06 +0000 (0:00:00.930) 0:00:03.204 *********
2026-03-03 01:08:37.610091 | orchestrator | changed: [testbed-manager]
2026-03-03 01:08:37.610097 | orchestrator |
2026-03-03 01:08:37.610103 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-03 01:08:37.610109 | orchestrator | Tuesday 03 March 2026 01:07:07 +0000 (0:00:01.210) 0:00:04.414 *********
2026-03-03 01:08:37.610119 | orchestrator | changed: [testbed-manager]
2026-03-03 01:08:37.610126 | orchestrator |
2026-03-03 01:08:37.610132 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-03 01:08:37.610137 | orchestrator | Tuesday 03 March 2026 01:07:08 +0000 (0:00:01.361) 0:00:05.776 *********
2026-03-03 01:08:37.610143 | orchestrator | changed: [testbed-manager]
2026-03-03 01:08:37.610149 | orchestrator |
2026-03-03 01:08:37.610154 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-03 01:08:37.610160 | orchestrator | Tuesday 03 March 2026 01:07:09 +0000 (0:00:00.896) 0:00:06.673 *********
2026-03-03 01:08:37.610165 | orchestrator | changed: [testbed-manager]
2026-03-03 01:08:37.610171 | orchestrator |
2026-03-03 01:08:37.610176 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-03 01:08:37.610182 | orchestrator | Tuesday 03 March 2026 01:07:10 +0000 (0:00:01.072) 0:00:07.745 *********
2026-03-03 01:08:37.610187 | orchestrator | changed: [testbed-manager]
2026-03-03 01:08:37.610193 | orchestrator |
2026-03-03 01:08:37.610199 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-03 01:08:37.610205 | orchestrator | Tuesday 03 March 2026 01:07:11 +0000 (0:00:00.916) 0:00:08.661 *********
2026-03-03 01:08:37.610210 | orchestrator | changed: [testbed-manager]
2026-03-03 01:08:37.610216 | orchestrator |
2026-03-03 01:08:37.610222 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-03 01:08:37.610227 | orchestrator | Tuesday 03 March 2026 01:08:11 +0000 (0:01:00.055) 0:01:08.717 *********
2026-03-03 01:08:37.610232 | orchestrator | skipping: [testbed-manager]
2026-03-03 01:08:37.610238 | orchestrator |
2026-03-03 01:08:37.610244 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-03 01:08:37.610250 | orchestrator |
2026-03-03 01:08:37.610256 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-03 01:08:37.610261 | orchestrator | Tuesday 03 March 2026 01:08:11 +0000 (0:00:00.130) 0:01:08.847 *********
2026-03-03 01:08:37.610267 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:08:37.610272 | orchestrator |
2026-03-03 01:08:37.610278 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-03 01:08:37.610284 | orchestrator |
2026-03-03 01:08:37.610294 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-03 01:08:37.610300 | orchestrator | Tuesday 03 March 2026 01:08:23 +0000 (0:00:11.358) 0:01:20.206 *********
2026-03-03 01:08:37.610306 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:08:37.610312 | orchestrator |
2026-03-03 01:08:37.610317 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-03 01:08:37.610323 | orchestrator |
2026-03-03 01:08:37.610385 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-03 01:08:37.610402 | orchestrator | Tuesday 03 March 2026 01:08:24 +0000 (0:00:01.105) 0:01:21.311 *********
2026-03-03 01:08:37.610409 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:08:37.610414 | orchestrator |
2026-03-03 01:08:37.610421 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:08:37.610427 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-03 01:08:37.610435 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.610441 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.610448 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:08:37.610454 | orchestrator |
2026-03-03 01:08:37.610460 | orchestrator |
2026-03-03 01:08:37.610467 | orchestrator |
2026-03-03 01:08:37.610473 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:08:37.610483 | orchestrator | Tuesday 03 March 2026 01:08:35 +0000 (0:00:11.365) 0:01:32.676 *********
2026-03-03 01:08:37.610490 | orchestrator | ===============================================================================
2026-03-03 01:08:37.610495 | orchestrator | Create admin user ------------------------------------------------------ 60.06s
2026-03-03 01:08:37.610501 | orchestrator | Restart ceph manager service ------------------------------------------- 23.83s
2026-03-03 01:08:37.610506 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.36s
2026-03-03 01:08:37.610512 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.21s
2026-03-03 01:08:37.610518 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.21s
2026-03-03 01:08:37.610524 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.07s
2026-03-03 01:08:37.610530 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.93s
2026-03-03 01:08:37.610536 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.92s
2026-03-03 01:08:37.610542 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.90s
2026-03-03 01:08:37.610549 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.86s
2026-03-03 01:08:37.610555 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s
2026-03-03 01:08:37.610561 | orchestrator | 2026-03-03 01:08:37 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED
2026-03-03 01:08:37.610568 | orchestrator | 2026-03-03 01:08:37 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:08:40.649382 | orchestrator | 2026-03-03 01:08:40 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED
2026-03-03 01:08:40.650764 | orchestrator | 2026-03-03 01:08:40 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:08:40.652387 | orchestrator | 2026-03-03 01:08:40 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED
2026-03-03 01:08:40.654746 | orchestrator | 2026-03-03 01:08:40 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED
2026-03-03 01:08:40.654839 | orchestrator | 2026-03-03 01:08:40 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:08:43.709818 | orchestrator | 2026-03-03 01:08:43 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED
2026-03-03 01:08:43.711250 | orchestrator | 2026-03-03 01:08:43 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03
01:08:43.711775 | orchestrator | 2026-03-03 01:08:43 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED
2026-03-03 01:08:43.712512 | orchestrator | 2026-03-03 01:08:43 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED
2026-03-03 01:08:43.712537 | orchestrator | 2026-03-03 01:08:43 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:09:53.708416 | orchestrator | 2026-03-03 01:09:53 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED
2026-03-03 01:09:53.708495 | orchestrator | 2026-03-03 01:09:53 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:09:53.708505 | orchestrator | 2026-03-03 01:09:53 | INFO  | Task c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state STARTED
2026-03-03 01:09:53.708511 | orchestrator | 2026-03-03 01:09:53 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED
2026-03-03 01:09:53.708518 | orchestrator | 2026-03-03 01:09:53 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:09:56.739528 | orchestrator | 2026-03-03 01:09:56 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state STARTED
2026-03-03 01:09:56.741087 | orchestrator | 2026-03-03 01:09:56 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:09:56.744952 | orchestrator | 2026-03-03 01:09:56 | INFO  | Task
c3fbab06-435f-4da7-8107-1d0b5ebc87a4 is in state SUCCESS 2026-03-03 01:09:56.745403 | orchestrator | 2026-03-03 01:09:56.747910 | orchestrator | 2026-03-03 01:09:56.747962 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 01:09:56.747970 | orchestrator | 2026-03-03 01:09:56.747976 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:09:56.748000 | orchestrator | Tuesday 03 March 2026 01:07:10 +0000 (0:00:00.371) 0:00:00.371 ********* 2026-03-03 01:09:56.748006 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:09:56.748012 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:09:56.748017 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:09:56.748022 | orchestrator | 2026-03-03 01:09:56.748028 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:09:56.748033 | orchestrator | Tuesday 03 March 2026 01:07:10 +0000 (0:00:00.328) 0:00:00.700 ********* 2026-03-03 01:09:56.748038 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-03 01:09:56.748044 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-03 01:09:56.748049 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-03 01:09:56.748064 | orchestrator | 2026-03-03 01:09:56.748074 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-03 01:09:56.748079 | orchestrator | 2026-03-03 01:09:56.748084 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-03 01:09:56.748090 | orchestrator | Tuesday 03 March 2026 01:07:11 +0000 (0:00:00.525) 0:00:01.225 ********* 2026-03-03 01:09:56.748095 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:09:56.748101 | orchestrator | 2026-03-03 01:09:56.748106 | 
orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-03 01:09:56.748111 | orchestrator | Tuesday 03 March 2026 01:07:11 +0000 (0:00:00.605) 0:00:01.831 ********* 2026-03-03 01:09:56.748117 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-03 01:09:56.748133 | orchestrator | 2026-03-03 01:09:56.748139 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-03 01:09:56.748144 | orchestrator | Tuesday 03 March 2026 01:07:16 +0000 (0:00:04.969) 0:00:06.801 ********* 2026-03-03 01:09:56.748150 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-03 01:09:56.748155 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-03 01:09:56.748161 | orchestrator | 2026-03-03 01:09:56.748166 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-03 01:09:56.748171 | orchestrator | Tuesday 03 March 2026 01:07:22 +0000 (0:00:06.209) 0:00:13.010 ********* 2026-03-03 01:09:56.748176 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-03 01:09:56.748181 | orchestrator | 2026-03-03 01:09:56.748186 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-03 01:09:56.748191 | orchestrator | Tuesday 03 March 2026 01:07:26 +0000 (0:00:03.581) 0:00:16.592 ********* 2026-03-03 01:09:56.748197 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-03 01:09:56.748202 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-03 01:09:56.748207 | orchestrator | 2026-03-03 01:09:56.748213 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-03 01:09:56.748218 | orchestrator | Tuesday 03 March 2026 01:07:30 +0000 (0:00:03.916) 
0:00:20.508 ********* 2026-03-03 01:09:56.748223 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-03 01:09:56.748229 | orchestrator | 2026-03-03 01:09:56.748333 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-03 01:09:56.748342 | orchestrator | Tuesday 03 March 2026 01:07:33 +0000 (0:00:03.237) 0:00:23.746 ********* 2026-03-03 01:09:56.748348 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-03 01:09:56.748353 | orchestrator | 2026-03-03 01:09:56.748358 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-03 01:09:56.748363 | orchestrator | Tuesday 03 March 2026 01:07:37 +0000 (0:00:04.159) 0:00:27.905 ********* 2026-03-03 01:09:56.748390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.748405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.748413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.748422 | orchestrator | 2026-03-03 01:09:56.748428 | 
orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-03 01:09:56.748433 | orchestrator | Tuesday 03 March 2026 01:07:43 +0000 (0:00:05.773) 0:00:33.679 ********* 2026-03-03 01:09:56.748438 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:09:56.748444 | orchestrator | 2026-03-03 01:09:56.748449 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-03 01:09:56.748459 | orchestrator | Tuesday 03 March 2026 01:07:44 +0000 (0:00:00.780) 0:00:34.459 ********* 2026-03-03 01:09:56.748464 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:09:56.748470 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:09:56.748475 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:09:56.748480 | orchestrator | 2026-03-03 01:09:56.748490 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-03 01:09:56.748495 | orchestrator | Tuesday 03 March 2026 01:07:48 +0000 (0:00:04.075) 0:00:38.535 ********* 2026-03-03 01:09:56.748501 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-03 01:09:56.748506 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-03 01:09:56.748512 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-03 01:09:56.748517 | orchestrator | 2026-03-03 01:09:56.748522 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-03 01:09:56.748528 | orchestrator | Tuesday 03 March 2026 01:07:50 +0000 (0:00:01.835) 0:00:40.370 ********* 2026-03-03 01:09:56.748533 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 
'ceph', 'enabled': True}) 2026-03-03 01:09:56.748538 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-03 01:09:56.748544 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-03 01:09:56.748549 | orchestrator | 2026-03-03 01:09:56.748554 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-03 01:09:56.748560 | orchestrator | Tuesday 03 March 2026 01:07:51 +0000 (0:00:01.586) 0:00:41.956 ********* 2026-03-03 01:09:56.748565 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:09:56.748570 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:09:56.748576 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:09:56.748581 | orchestrator | 2026-03-03 01:09:56.748587 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-03 01:09:56.748592 | orchestrator | Tuesday 03 March 2026 01:07:52 +0000 (0:00:00.735) 0:00:42.691 ********* 2026-03-03 01:09:56.748597 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:56.748602 | orchestrator | 2026-03-03 01:09:56.748607 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-03 01:09:56.748613 | orchestrator | Tuesday 03 March 2026 01:07:52 +0000 (0:00:00.121) 0:00:42.813 ********* 2026-03-03 01:09:56.748621 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:56.748626 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:56.748632 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:56.748637 | orchestrator | 2026-03-03 01:09:56.748642 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-03 01:09:56.748647 | orchestrator | Tuesday 03 March 2026 01:07:52 +0000 (0:00:00.242) 0:00:43.056 ********* 2026-03-03 01:09:56.748652 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:09:56.748657 | orchestrator | 2026-03-03 01:09:56.748662 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-03 01:09:56.748667 | orchestrator | Tuesday 03 March 2026 01:07:53 +0000 (0:00:00.492) 0:00:43.549 ********* 2026-03-03 01:09:56.748676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.748686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.748698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.748704 | orchestrator | 2026-03-03 01:09:56.748709 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-03 01:09:56.748714 | orchestrator | Tuesday 03 March 2026 01:07:58 +0000 (0:00:05.484) 0:00:49.034 ********* 2026-03-03 01:09:56.748724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-03 01:09:56.748730 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:56.748736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': 
True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-03 01:09:56.748744 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:56.748756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-03 01:09:56.748762 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:56.748768 | orchestrator | 2026-03-03 01:09:56.748773 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-03 01:09:56.748778 | orchestrator | Tuesday 03 March 2026 01:08:02 +0000 (0:00:03.556) 0:00:52.591 ********* 2026-03-03 01:09:56.748783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-03 01:09:56.748798 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:56.748807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-03 01:09:56.748813 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:56.748822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-03 01:09:56.748831 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:56.748836 | orchestrator | 2026-03-03 01:09:56.748842 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-03 01:09:56.748847 | orchestrator | Tuesday 03 March 2026 01:08:05 +0000 (0:00:03.044) 0:00:55.636 ********* 2026-03-03 01:09:56.748852 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:56.748857 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:56.748863 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:56.748868 | orchestrator | 2026-03-03 01:09:56.748873 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-03 01:09:56.748878 | orchestrator | Tuesday 03 March 2026 01:08:09 +0000 (0:00:03.486) 0:00:59.122 ********* 2026-03-03 01:09:56.748886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.748895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.748906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-03 01:09:56.748911 | orchestrator |
2026-03-03 01:09:56.748916 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-03-03 01:09:56.748921 | orchestrator | Tuesday 03 March 2026 01:08:12 +0000 (0:00:03.433) 0:01:02.556 *********
2026-03-03 01:09:56.748930 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:09:56.748935 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:09:56.748940 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:09:56.748945 | orchestrator |
2026-03-03 01:09:56.748950 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-03-03 01:09:56.748956 | orchestrator | Tuesday 03 March 2026 01:08:20 +0000 (0:00:07.636) 0:01:10.192 *********
2026-03-03 01:09:56.748961 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:09:56.748966 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:09:56.748971 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:09:56.748976 | orchestrator |
2026-03-03 01:09:56.748981 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-03-03 01:09:56.748987 | orchestrator | Tuesday 03 March 2026 01:08:24 +0000 (0:00:04.800) 0:01:14.993 *********
2026-03-03 01:09:56.748992 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:09:56.748997 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:09:56.749002 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:09:56.749008 | orchestrator |
2026-03-03 01:09:56.749013 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-03-03 01:09:56.749018 | orchestrator | Tuesday 03 March 2026 01:08:29 +0000 (0:00:04.636) 0:01:19.630 *********
2026-03-03 01:09:56.749027 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:09:56.749036 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:09:56.749041 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:09:56.749046 | orchestrator |
2026-03-03 01:09:56.749051 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-03-03 01:09:56.749057 | orchestrator | Tuesday 03 March 2026 01:08:34 +0000 (0:00:04.727) 0:01:24.357 *********
2026-03-03 01:09:56.749063 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:09:56.749068 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:09:56.749073 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:09:56.749079 | orchestrator |
2026-03-03 01:09:56.749084 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-03-03 01:09:56.749090 | orchestrator | Tuesday 03 March 2026 01:08:37 +0000 (0:00:02.934) 0:01:27.292 *********
2026-03-03 01:09:56.749095 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:09:56.749100 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:09:56.749105 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:09:56.749111 | orchestrator |
2026-03-03 01:09:56.749116 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-03-03 01:09:56.749121 | orchestrator | Tuesday 03 March 2026 01:08:37 +0000 (0:00:00.259) 0:01:27.552 *********
2026-03-03 01:09:56.749143 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-03 01:09:56.749149 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:09:56.749154 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-03 01:09:56.749160 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:09:56.749165 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-03 01:09:56.749170 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:09:56.749175 | orchestrator |
2026-03-03 01:09:56.749180 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-03-03 01:09:56.749185 | orchestrator | Tuesday 03 March 2026 01:08:40 +0000 (0:00:03.192) 0:01:30.744 *********
2026-03-03 01:09:56.749190 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:09:56.749196 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:09:56.749201 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:09:56.749206 | orchestrator |
2026-03-03 01:09:56.749212 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-03-03 01:09:56.749217 | orchestrator | Tuesday 03 March 2026 01:08:45 +0000 (0:00:04.760) 0:01:35.504 *********
2026-03-03 01:09:56.749225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.749239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-03 01:09:56.749246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-03 01:09:56.749252 | orchestrator |
2026-03-03 01:09:56.749257 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-03 01:09:56.749262 | orchestrator | Tuesday 03 March 2026 01:08:49 +0000 (0:00:04.495) 0:01:40.000 *********
2026-03-03 01:09:56.749268 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:09:56.749273 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:09:56.749281 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:09:56.749287 | orchestrator |
2026-03-03 01:09:56.749292 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-03-03 01:09:56.749297 | orchestrator | Tuesday 03 March 2026 01:08:50 +0000 (0:00:00.289) 0:01:40.290 *********
2026-03-03 01:09:56.749302 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:09:56.749307 | orchestrator |
2026-03-03 01:09:56.749313 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-03-03 01:09:56.749318 | orchestrator | Tuesday 03 March 2026 01:08:52 +0000 (0:00:02.003) 0:01:42.293 *********
2026-03-03 01:09:56.749327 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:09:56.749332 | orchestrator |
2026-03-03 01:09:56.749337 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-03-03 01:09:56.749343 | orchestrator | Tuesday 03 March 2026 01:08:54 +0000 (0:00:02.367) 0:01:44.661 *********
2026-03-03 01:09:56.749348 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:09:56.749353 | orchestrator |
2026-03-03 01:09:56.749358 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-03-03 01:09:56.749363 | orchestrator | Tuesday 03 March 2026 01:08:56 +0000 (0:00:01.989) 0:01:46.650 *********
2026-03-03 01:09:56.749369 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:09:56.749374 | orchestrator |
2026-03-03 01:09:56.749379 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-03-03 01:09:56.749385 | orchestrator | Tuesday 03 March 2026 01:09:20 +0000 (0:00:24.436) 0:02:11.086 *********
2026-03-03 01:09:56.749390 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:09:56.749395 | orchestrator |
2026-03-03 01:09:56.749400 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-03 01:09:56.749405 | orchestrator | Tuesday 03 March 2026 01:09:24 +0000 (0:00:03.687) 0:02:14.774 *********
2026-03-03 01:09:56.749410 | orchestrator |
2026-03-03 01:09:56.749419 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-03 01:09:56.749424 | orchestrator | Tuesday 03 March 2026 01:09:24 +0000 (0:00:00.059) 0:02:14.833 *********
2026-03-03 01:09:56.749430 | orchestrator |
2026-03-03 01:09:56.749435 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-03 01:09:56.749440 | orchestrator | Tuesday 03 March 2026 01:09:24 +0000 (0:00:00.077) 0:02:14.911 *********
2026-03-03 01:09:56.749445 | orchestrator |
2026-03-03 01:09:56.749450 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-03-03 01:09:56.749456 | orchestrator | Tuesday 03 March 2026 01:09:24 +0000 (0:00:00.063) 0:02:14.975 *********
2026-03-03 01:09:56.749461 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:09:56.749466 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:09:56.749471 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:09:56.749477 | orchestrator |
2026-03-03 01:09:56.749482 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:09:56.749489 | orchestrator | testbed-node-0 : ok=27  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-03 01:09:56.749495 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-03 01:09:56.749500 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-03 01:09:56.749505 | orchestrator |
2026-03-03 01:09:56.749511 | orchestrator |
2026-03-03 01:09:56.749516 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:09:56.749521 | orchestrator | Tuesday 03 March 2026 01:09:54 +0000 (0:00:29.586) 0:02:44.563 *********
2026-03-03 01:09:56.749526 | orchestrator | ===============================================================================
2026-03-03 01:09:56.749531 | orchestrator | glance : Restart glance-api container ---------------------------------- 29.59s
2026-03-03 01:09:56.749536 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 24.44s
2026-03-03 01:09:56.749545 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.64s
2026-03-03 01:09:56.749550 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.21s
2026-03-03 01:09:56.749555 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.77s
2026-03-03 01:09:56.749560 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.48s
2026-03-03 01:09:56.749565 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.97s
2026-03-03 01:09:56.749569 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.80s
2026-03-03 01:09:56.749574 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.76s
2026-03-03 01:09:56.749579 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.73s
2026-03-03 01:09:56.749584 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.64s
2026-03-03 01:09:56.749589 | orchestrator | glance : Check glance containers ---------------------------------------- 4.50s
2026-03-03 01:09:56.749594 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.16s
2026-03-03 01:09:56.749599 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.08s
2026-03-03 01:09:56.749604 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.92s
2026-03-03 01:09:56.749610 | orchestrator | glance : Disable log_bin_trust_function_creators function --------------- 3.69s
2026-03-03 01:09:56.749615 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.58s
2026-03-03 01:09:56.749620 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.56s
2026-03-03 01:09:56.749626 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.49s
2026-03-03 01:09:56.749630 | orchestrator | glance : Copying over config.json files for services -------------------- 3.43s
2026-03-03 01:09:56.750768 | orchestrator | 2026-03-03 01:09:56 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state
STARTED
2026-03-03 01:09:56.750845 | orchestrator | 2026-03-03 01:09:56 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:09:59.807437 | orchestrator | 2026-03-03 01:09:59 | INFO  | Task c9f2fe59-daf9-41d5-9a3d-e0aef29cef7d is in state SUCCESS
2026-03-03 01:09:59.809221 | orchestrator |
2026-03-03 01:09:59.809272 | orchestrator |
2026-03-03 01:09:59.809281 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:09:59.809288 | orchestrator |
2026-03-03 01:09:59.809294 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:09:59.809299 | orchestrator | Tuesday 03 March 2026 01:07:03 +0000 (0:00:00.235) 0:00:00.235 *********
2026-03-03 01:09:59.809305 | orchestrator | ok: [testbed-manager]
2026-03-03 01:09:59.809312 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:09:59.809317 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:09:59.809323 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:09:59.809329 | orchestrator | ok: [testbed-node-3]
2026-03-03 01:09:59.809335 | orchestrator | ok: [testbed-node-4]
2026-03-03 01:09:59.809342 | orchestrator | ok: [testbed-node-5]
2026-03-03 01:09:59.809348 | orchestrator |
2026-03-03 01:09:59.809355 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 01:09:59.809363 | orchestrator | Tuesday 03 March 2026 01:07:03 +0000 (0:00:00.765) 0:00:01.001 *********
2026-03-03 01:09:59.809369 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-03 01:09:59.809375 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-03 01:09:59.809380 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-03 01:09:59.809386 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-03 01:09:59.809392 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-03 01:09:59.809398 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-03 01:09:59.809420 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-03 01:09:59.809424 | orchestrator |
2026-03-03 01:09:59.809427 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-03 01:09:59.809430 | orchestrator |
2026-03-03 01:09:59.809434 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-03 01:09:59.809437 | orchestrator | Tuesday 03 March 2026 01:07:04 +0000 (0:00:00.566) 0:00:01.567 *********
2026-03-03 01:09:59.809441 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 01:09:59.809445 | orchestrator |
2026-03-03 01:09:59.809448 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-03 01:09:59.809451 | orchestrator | Tuesday 03 March 2026 01:07:05 +0000 (0:00:01.367) 0:00:02.935 *********
2026-03-03 01:09:59.809456 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-03 01:09:59.809462 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.809466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.809469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.809487 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.809493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.809502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.809511 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.809517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809547 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809872 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-03 01:09:59.809894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.809922 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.809955 | orchestrator |
2026-03-03 01:09:59.809961 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-03 01:09:59.809965 | orchestrator | Tuesday 03 March 2026 01:07:09 +0000 (0:00:03.713) 0:00:06.648 *********
2026-03-03 01:09:59.809971 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-03 01:09:59.809976 | orchestrator |
2026-03-03 01:09:59.809981 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-03-03 01:09:59.809985 | orchestrator | Tuesday 03 March 2026 01:07:11 +0000 (0:00:01.657) 0:00:08.305 *********
2026-03-03 01:09:59.809991 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-03 01:09:59.809996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810051 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810075 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810080 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810119 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-03 01:09:59.810125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810236 | orchestrator |
2026-03-03 01:09:59.810582 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-03-03 01:09:59.810596 | orchestrator | Tuesday 03 March 2026 01:07:17 +0000 (0:00:06.606) 0:00:14.912 *********
2026-03-03 01:09:59.810600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810821 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-03 01:09:59.810828 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810831 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810835 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-03 01:09:59.810839 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-03 01:09:59.810878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810885 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:09:59.810890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810900 | orchestrator | skipping: [testbed-manager]
2026-03-03 01:09:59.810908 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:09:59.810913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-03 01:09:59.810918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-03 01:09:59.810924 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:09:59.810946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.810953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.810958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.810963 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.810969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.810974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.810984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.810990 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.810996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.811003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811024 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811030 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.811035 | orchestrator | 2026-03-03 01:09:59.811041 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-03 01:09:59.811046 | orchestrator | Tuesday 03 March 2026 01:07:19 +0000 (0:00:01.413) 0:00:16.325 ********* 2026-03-03 01:09:59.811051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.811056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811070 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.811075 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-03 01:09:59.811081 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.811089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.811111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811288 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-03 01:09:59.811292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811300 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811338 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.811343 | orchestrator | skipping: [testbed-manager] 2026-03-03 01:09:59.811346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.811350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811366 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.811369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.811373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-03-03 01:09:59.811392 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.811395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.811398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-03 01:09:59.811414 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:59.811417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-03 01:09:59.811421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-03 01:09:59.811439 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.811443 | orchestrator | 2026-03-03 01:09:59.811446 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-03 01:09:59.811449 | orchestrator | Tuesday 03 March 2026 01:07:20 +0000 (0:00:01.772) 0:00:18.098 ********* 2026-03-03 01:09:59.811452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.811458 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-03 01:09:59.811462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.811465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.811469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.811473 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.811490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.811509 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.811515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811538 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811593 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-03 01:09:59.811609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.811629 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.811645 | orchestrator | 2026-03-03 01:09:59.811651 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-03 01:09:59.811654 | orchestrator | Tuesday 03 March 2026 01:07:26 
+0000 (0:00:05.687) 0:00:23.785 ********* 2026-03-03 01:09:59.811658 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-03 01:09:59.811662 | orchestrator | 2026-03-03 01:09:59.811667 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-03 01:09:59.811680 | orchestrator | Tuesday 03 March 2026 01:07:28 +0000 (0:00:01.386) 0:00:25.172 ********* 2026-03-03 01:09:59.811685 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099794, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811689 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099823, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6417289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811693 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099794, 'dev': 109, 'nlink': 1, 'atime': 
1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811697 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099794, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811701 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099794, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811705 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099794, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811723 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099794, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811728 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099787, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6372662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811732 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099823, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6417289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 
01:09:59.811736 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099823, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6417289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811740 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099794, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.811744 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099823, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6417289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811748 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099823, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6417289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811766 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099812, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6401303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811770 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099787, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6372662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811773 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099787, 'dev': 
109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6372662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811776 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099823, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6417289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811779 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099787, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6372662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811783 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099775, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6361513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811786 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099812, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6401303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811802 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099787, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6372662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811806 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099812, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6401303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 
01:09:59.811809 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099787, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6372662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811813 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099775, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6361513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811816 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1099823, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6417289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.811819 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099812, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6401303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811825 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099775, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6361513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811830 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099801, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811843 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099812, 'dev': 109, 
'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6401303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811847 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099812, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6401303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811850 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1099810, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6397326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811853 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099801, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811856 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099775, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6361513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811862 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099775, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6361513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811868 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099801, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.811881 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099775, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6361513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811885 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099801, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811888 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099803, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811891 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1099810, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6397326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811895 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099801, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811900 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099803, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811904 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1099787, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6372662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811918 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1099810, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6397326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811922 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1099810, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6397326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811925 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099801, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811980 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1099810, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6397326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811985 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1099810, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6397326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811992 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099791, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6375976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.811995 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099791, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6375976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812012 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099803, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812016 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099803, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812020 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099803, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812023 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099803, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812029 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099820, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.641352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812032 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099791, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6375976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812035 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099791, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6375976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812050 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099820, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.641352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812054 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099812, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6401303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812057 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099791, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6375976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812062 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099791, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6375976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812074 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099772, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.634561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812081 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099820, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.641352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812087 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099772, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.634561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812112 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099820, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.641352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812118 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099820, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.641352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812123 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099838, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6428668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812139 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099838, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6428668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812149 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099820, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.641352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812154 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099772, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.634561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812159 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099772, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.634561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812183 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099772, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.634561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812189 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099838, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6428668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812195 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099818, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6410823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812204 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099838, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6428668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812210 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099818, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6410823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812215 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099838, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6428668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812220 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099818, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6410823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812228 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099772, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.634561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812232 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099818, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6410823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812235 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099818, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6410823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812241 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099838, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6428668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812245 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099784, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6367226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812248 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099784, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6367226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812251 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099784, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6367226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812258 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099818, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6410823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812262 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099773, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6347132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812265 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099775, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6361513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812271 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099784, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6367226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812274 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099784, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6367226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812293 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099773, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6347132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812297 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099784, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6367226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812304 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099808, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6395311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812307 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099773, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6347132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812311 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099808, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6395311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812316 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099773, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6347132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812320 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099805, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-03 01:09:59.812324 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099801, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6386063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812330 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099805, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812342 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099808, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6395311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812350 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099836, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.642548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812358 | 
orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.812364 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099773, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6347132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812369 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099773, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6347132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812374 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099836, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.642548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812379 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.812384 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099808, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6395311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812389 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099805, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812405 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099836, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.642548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812411 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.812417 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099808, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6395311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812426 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099808, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6395311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812432 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1099810, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6397326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812437 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099805, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812443 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099805, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812448 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099805, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812461 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099836, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.642548, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812471 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:59.812477 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099836, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.642548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812482 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.812487 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099836, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.642548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-03 01:09:59.812493 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.812498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099803, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 
'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812504 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099791, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6375976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812510 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099820, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.641352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812515 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099772, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.634561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812527 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099838, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6428668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812534 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099818, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6410823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812538 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099784, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6367226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812541 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1099773, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6347132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812544 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099808, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6395311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812547 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099805, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6390955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812551 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099836, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.642548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-03 01:09:59.812554 | orchestrator | 2026-03-03 01:09:59.812559 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-03 01:09:59.812563 | orchestrator | Tuesday 03 March 2026 01:07:53 +0000 (0:00:25.407) 0:00:50.579 ********* 2026-03-03 01:09:59.812568 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-03 01:09:59.812571 | orchestrator | 2026-03-03 01:09:59.812576 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-03 01:09:59.812579 | orchestrator | Tuesday 03 March 2026 01:07:54 +0000 (0:00:00.736) 0:00:51.315 ********* 2026-03-03 01:09:59.812583 | orchestrator | [WARNING]: Skipped 2026-03-03 01:09:59.812587 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812590 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-03 01:09:59.812593 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812597 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-03 01:09:59.812601 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 01:09:59.812605 | orchestrator | [WARNING]: Skipped 2026-03-03 01:09:59.812610 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812618 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-03 01:09:59.812624 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812629 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-03 01:09:59.812635 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-03 01:09:59.812639 | orchestrator | [WARNING]: Skipped 2026-03-03 01:09:59.812644 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812649 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-03 01:09:59.812654 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812659 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-03 01:09:59.812664 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-03 01:09:59.812669 | orchestrator | [WARNING]: Skipped 2026-03-03 01:09:59.812675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812681 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-03 01:09:59.812686 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812692 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-03 01:09:59.812697 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-03 01:09:59.812703 | orchestrator | [WARNING]: Skipped 2026-03-03 01:09:59.812707 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812711 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-03 01:09:59.812715 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812718 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-03 01:09:59.812722 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-03 01:09:59.812726 | orchestrator | [WARNING]: Skipped 
2026-03-03 01:09:59.812729 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812733 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-03 01:09:59.812737 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812741 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-03 01:09:59.812744 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-03 01:09:59.812748 | orchestrator | [WARNING]: Skipped 2026-03-03 01:09:59.812751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812755 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-03 01:09:59.812759 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-03 01:09:59.812769 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-03 01:09:59.812777 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-03 01:09:59.812785 | orchestrator | 2026-03-03 01:09:59.812790 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-03 01:09:59.812794 | orchestrator | Tuesday 03 March 2026 01:07:57 +0000 (0:00:02.888) 0:00:54.204 ********* 2026-03-03 01:09:59.812799 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-03 01:09:59.812804 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.812809 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-03 01:09:59.812814 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.812819 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-03 01:09:59.812824 | orchestrator | skipping: [testbed-node-2] 2026-03-03 
01:09:59.812829 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-03 01:09:59.812834 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.812838 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-03 01:09:59.812844 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.812849 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-03 01:09:59.812853 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.812858 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-03 01:09:59.812863 | orchestrator | 2026-03-03 01:09:59.812868 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-03 01:09:59.812873 | orchestrator | Tuesday 03 March 2026 01:08:12 +0000 (0:00:15.490) 0:01:09.694 ********* 2026-03-03 01:09:59.812886 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-03 01:09:59.812892 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.812897 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-03 01:09:59.812903 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.812908 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-03 01:09:59.812914 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:59.812919 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-03 01:09:59.812925 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.812929 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-03 01:09:59.812933 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.812936 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-03 01:09:59.812940 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.812944 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-03 01:09:59.812947 | orchestrator | 2026-03-03 01:09:59.812951 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-03 01:09:59.812955 | orchestrator | Tuesday 03 March 2026 01:08:16 +0000 (0:00:04.434) 0:01:14.128 ********* 2026-03-03 01:09:59.812959 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-03 01:09:59.812963 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-03 01:09:59.812968 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-03 01:09:59.812980 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.812987 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-03 01:09:59.812994 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:59.812999 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.813004 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-03 01:09:59.813009 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.813014 | orchestrator | skipping: [testbed-node-4] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-03 01:09:59.813020 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.813024 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-03 01:09:59.813030 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.813034 | orchestrator | 2026-03-03 01:09:59.813039 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-03 01:09:59.813044 | orchestrator | Tuesday 03 March 2026 01:08:19 +0000 (0:00:02.489) 0:01:16.617 ********* 2026-03-03 01:09:59.813049 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-03 01:09:59.813054 | orchestrator | 2026-03-03 01:09:59.813060 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-03 01:09:59.813066 | orchestrator | Tuesday 03 March 2026 01:08:20 +0000 (0:00:00.611) 0:01:17.229 ********* 2026-03-03 01:09:59.813072 | orchestrator | skipping: [testbed-manager] 2026-03-03 01:09:59.813077 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.813083 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.813087 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:59.813091 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.813095 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.813098 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.813102 | orchestrator | 2026-03-03 01:09:59.813106 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-03 01:09:59.813109 | orchestrator | Tuesday 03 March 2026 01:08:20 +0000 (0:00:00.618) 0:01:17.848 ********* 2026-03-03 01:09:59.813113 | orchestrator | skipping: [testbed-manager] 2026-03-03 01:09:59.813116 | orchestrator | skipping: [testbed-node-3] 
2026-03-03 01:09:59.813120 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.813124 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.813140 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:09:59.813146 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:09:59.813151 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:09:59.813159 | orchestrator | 2026-03-03 01:09:59.813166 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-03 01:09:59.813171 | orchestrator | Tuesday 03 March 2026 01:08:23 +0000 (0:00:02.877) 0:01:20.726 ********* 2026-03-03 01:09:59.813176 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-03 01:09:59.813181 | orchestrator | skipping: [testbed-manager] 2026-03-03 01:09:59.813187 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-03 01:09:59.813192 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:59.813196 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-03 01:09:59.813201 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.813207 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-03 01:09:59.813212 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.813231 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-03 01:09:59.813243 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.813248 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-03 01:09:59.813254 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.813260 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  
2026-03-03 01:09:59.813265 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.813270 | orchestrator | 2026-03-03 01:09:59.813276 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-03 01:09:59.813282 | orchestrator | Tuesday 03 March 2026 01:08:25 +0000 (0:00:02.203) 0:01:22.929 ********* 2026-03-03 01:09:59.813288 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-03 01:09:59.813293 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-03 01:09:59.813299 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.813304 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.813308 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-03 01:09:59.813312 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-03 01:09:59.813316 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:59.813320 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-03 01:09:59.813323 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.813327 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-03 01:09:59.813331 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.813334 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-03 01:09:59.813338 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.813342 | orchestrator | 2026-03-03 01:09:59.813345 | orchestrator | TASK [prometheus : Find extra prometheus server 
config files] ****************** 2026-03-03 01:09:59.813349 | orchestrator | Tuesday 03 March 2026 01:08:27 +0000 (0:00:02.033) 0:01:24.963 ********* 2026-03-03 01:09:59.813353 | orchestrator | [WARNING]: Skipped 2026-03-03 01:09:59.813357 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-03 01:09:59.813361 | orchestrator | due to this access issue: 2026-03-03 01:09:59.813365 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-03 01:09:59.813368 | orchestrator | not a directory 2026-03-03 01:09:59.813372 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-03 01:09:59.813446 | orchestrator | 2026-03-03 01:09:59.813452 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-03 01:09:59.813455 | orchestrator | Tuesday 03 March 2026 01:08:28 +0000 (0:00:00.952) 0:01:25.916 ********* 2026-03-03 01:09:59.813459 | orchestrator | skipping: [testbed-manager] 2026-03-03 01:09:59.813463 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.813467 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.813470 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:09:59.813474 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.813478 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.813481 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.813485 | orchestrator | 2026-03-03 01:09:59.813489 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-03 01:09:59.813492 | orchestrator | Tuesday 03 March 2026 01:08:29 +0000 (0:00:00.849) 0:01:26.765 ********* 2026-03-03 01:09:59.813496 | orchestrator | skipping: [testbed-manager] 2026-03-03 01:09:59.813500 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:09:59.813503 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:09:59.813507 | orchestrator | 
skipping: [testbed-node-2] 2026-03-03 01:09:59.813514 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:09:59.813518 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:09:59.813522 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:09:59.813528 | orchestrator | 2026-03-03 01:09:59.813535 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-03 01:09:59.813542 | orchestrator | Tuesday 03 March 2026 01:08:30 +0000 (0:00:00.941) 0:01:27.707 ********* 2026-03-03 01:09:59.813577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.813591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.813597 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-03 01:09:59.813603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.813609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.813615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-03 01:09:59.813621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.813631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.813637 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.813649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.813653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.813657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-03 01:09:59.813661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.813665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.813674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.813678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.813682 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.813690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.813694 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.813698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.813702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.813707 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-03 01:09:59.813714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-03-03 01:09:59.813718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.813726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.813730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-03 01:09:59.813734 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.813738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.813744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-03 01:09:59.813748 | orchestrator | 2026-03-03 01:09:59.813752 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-03 01:09:59.813756 | orchestrator | Tuesday 03 March 2026 01:08:34 +0000 (0:00:04.267) 0:01:31.975 ********* 2026-03-03 01:09:59.813759 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-03 01:09:59.813763 | orchestrator | skipping: [testbed-manager] 2026-03-03 01:09:59.813767 | orchestrator | 2026-03-03 01:09:59.813771 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-03 01:09:59.813774 | orchestrator | Tuesday 03 March 2026 01:08:36 +0000 (0:00:01.322) 0:01:33.297 ********* 2026-03-03 01:09:59.813778 | orchestrator | 
2026-03-03 01:09:59.813782 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-03 01:09:59.813786 | orchestrator | Tuesday 03 March 2026 01:08:36 +0000 (0:00:00.072) 0:01:33.370 ********* 2026-03-03 01:09:59.813789 | orchestrator | 2026-03-03 01:09:59.813793 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-03 01:09:59.813797 | orchestrator | Tuesday 03 March 2026 01:08:36 +0000 (0:00:00.059) 0:01:33.429 ********* 2026-03-03 01:09:59.813801 | orchestrator | 2026-03-03 01:09:59.813804 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-03 01:09:59.813808 | orchestrator | Tuesday 03 March 2026 01:08:36 +0000 (0:00:00.060) 0:01:33.490 ********* 2026-03-03 01:09:59.813811 | orchestrator | 2026-03-03 01:09:59.813815 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-03 01:09:59.813819 | orchestrator | Tuesday 03 March 2026 01:08:36 +0000 (0:00:00.194) 0:01:33.685 ********* 2026-03-03 01:09:59.813823 | orchestrator | 2026-03-03 01:09:59.813827 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-03 01:09:59.813830 | orchestrator | Tuesday 03 March 2026 01:08:36 +0000 (0:00:00.052) 0:01:33.737 ********* 2026-03-03 01:09:59.813834 | orchestrator | 2026-03-03 01:09:59.813838 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-03 01:09:59.813841 | orchestrator | Tuesday 03 March 2026 01:08:36 +0000 (0:00:00.058) 0:01:33.796 ********* 2026-03-03 01:09:59.813845 | orchestrator | 2026-03-03 01:09:59.813849 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-03 01:09:59.813852 | orchestrator | Tuesday 03 March 2026 01:08:36 +0000 (0:00:00.068) 0:01:33.865 ********* 2026-03-03 01:09:59.813856 | orchestrator 
| changed: [testbed-manager] 2026-03-03 01:09:59.813860 | orchestrator | 2026-03-03 01:09:59.813865 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-03 01:09:59.813871 | orchestrator | Tuesday 03 March 2026 01:08:51 +0000 (0:00:15.167) 0:01:49.032 ********* 2026-03-03 01:09:59.813875 | orchestrator | changed: [testbed-manager] 2026-03-03 01:09:59.813878 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:09:59.813882 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:09:59.813886 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:09:59.813889 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:09:59.813893 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:09:59.813897 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:09:59.813901 | orchestrator | 2026-03-03 01:09:59.813904 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-03 01:09:59.813908 | orchestrator | Tuesday 03 March 2026 01:09:03 +0000 (0:00:11.850) 0:02:00.883 ********* 2026-03-03 01:09:59.813913 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:09:59.813923 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:09:59.813930 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:09:59.813935 | orchestrator | 2026-03-03 01:09:59.813940 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-03 01:09:59.813945 | orchestrator | Tuesday 03 March 2026 01:09:08 +0000 (0:00:04.869) 0:02:05.752 ********* 2026-03-03 01:09:59.813950 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:09:59.813955 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:09:59.813960 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:09:59.813964 | orchestrator | 2026-03-03 01:09:59.813969 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-03 01:09:59.813973 | 
orchestrator | Tuesday 03 March 2026 01:09:19 +0000 (0:00:10.564) 0:02:16.316 ********* 2026-03-03 01:09:59.813978 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:09:59.813983 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:09:59.813988 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:09:59.813993 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:09:59.813998 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:09:59.814005 | orchestrator | changed: [testbed-manager] 2026-03-03 01:09:59.814046 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:09:59.814055 | orchestrator | 2026-03-03 01:09:59.814063 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-03 01:09:59.814071 | orchestrator | Tuesday 03 March 2026 01:09:34 +0000 (0:00:15.278) 0:02:31.595 ********* 2026-03-03 01:09:59.814080 | orchestrator | changed: [testbed-manager] 2026-03-03 01:09:59.814089 | orchestrator | 2026-03-03 01:09:59.814095 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-03 01:09:59.814100 | orchestrator | Tuesday 03 March 2026 01:09:41 +0000 (0:00:06.754) 0:02:38.349 ********* 2026-03-03 01:09:59.814106 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:09:59.814112 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:09:59.814118 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:09:59.814123 | orchestrator | 2026-03-03 01:09:59.814145 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-03 01:09:59.814152 | orchestrator | Tuesday 03 March 2026 01:09:46 +0000 (0:00:05.520) 0:02:43.870 ********* 2026-03-03 01:09:59.814157 | orchestrator | changed: [testbed-manager] 2026-03-03 01:09:59.814163 | orchestrator | 2026-03-03 01:09:59.814168 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-03 01:09:59.814172 | 
orchestrator | Tuesday 03 March 2026 01:09:52 +0000 (0:00:05.303) 0:02:49.174 ********* 2026-03-03 01:09:59.814176 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:09:59.814180 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:09:59.814184 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:09:59.814187 | orchestrator | 2026-03-03 01:09:59.814191 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:09:59.814195 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-03 01:09:59.814200 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-03 01:09:59.814204 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-03 01:09:59.814207 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-03 01:09:59.814211 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-03 01:09:59.814215 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-03 01:09:59.814223 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-03 01:09:59.814227 | orchestrator | 2026-03-03 01:09:59.814231 | orchestrator | 2026-03-03 01:09:59.814235 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:09:59.814238 | orchestrator | Tuesday 03 March 2026 01:09:58 +0000 (0:00:06.887) 0:02:56.062 ********* 2026-03-03 01:09:59.814242 | orchestrator | =============================================================================== 2026-03-03 01:09:59.814246 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.41s 2026-03-03 
01:09:59.814250 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.49s 2026-03-03 01:09:59.814253 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.28s 2026-03-03 01:09:59.814257 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.17s 2026-03-03 01:09:59.814264 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 11.85s 2026-03-03 01:09:59.814271 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.56s 2026-03-03 01:09:59.814275 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.89s 2026-03-03 01:09:59.814279 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.75s 2026-03-03 01:09:59.814283 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.61s 2026-03-03 01:09:59.814287 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.69s 2026-03-03 01:09:59.814291 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.52s 2026-03-03 01:09:59.814294 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.30s 2026-03-03 01:09:59.814298 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 4.87s 2026-03-03 01:09:59.814302 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.43s 2026-03-03 01:09:59.814306 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.27s 2026-03-03 01:09:59.814309 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.71s 2026-03-03 01:09:59.814313 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.89s 2026-03-03 01:09:59.814317 
| orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.88s 2026-03-03 01:09:59.814320 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.49s 2026-03-03 01:09:59.814324 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.20s 2026-03-03 01:09:59.814328 | orchestrator | 2026-03-03 01:09:59 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:09:59.814331 | orchestrator | 2026-03-03 01:09:59 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:09:59.814335 | orchestrator | 2026-03-03 01:09:59 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED 2026-03-03 01:09:59.814339 | orchestrator | 2026-03-03 01:09:59 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:10:02.844263 | orchestrator | 2026-03-03 01:10:02 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:10:02.845455 | orchestrator | 2026-03-03 01:10:02 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:10:02.845987 | orchestrator | 2026-03-03 01:10:02 | INFO  | Task 85aeeb43-e191-4d35-bb63-de6d94ea1626 is in state STARTED 2026-03-03 01:10:02.850452 | orchestrator | 2026-03-03 01:10:02 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state STARTED 2026-03-03 01:10:02.850512 | orchestrator | 2026-03-03 01:10:02 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:10:05.888944 | orchestrator | 2026-03-03 01:10:05 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED 2026-03-03 01:10:05.890962 | orchestrator | 2026-03-03 01:10:05 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:10:05.892535 | orchestrator | 2026-03-03 01:10:05 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:10:05.896472 | orchestrator | 2026-03-03 01:10:05 | INFO 
 | Task 85aeeb43-e191-4d35-bb63-de6d94ea1626 is in state STARTED 2026-03-03 01:10:05.900106 | orchestrator | 2026-03-03 01:10:05 | INFO  | Task 02d711a0-ebb1-41b3-890b-94476b913a3a is in state SUCCESS 2026-03-03 01:10:05.901727 | orchestrator | 2026-03-03 01:10:05.901775 | orchestrator | 2026-03-03 01:10:05.901781 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 01:10:05.901787 | orchestrator | 2026-03-03 01:10:05.901792 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:10:05.901797 | orchestrator | Tuesday 03 March 2026 01:07:13 +0000 (0:00:00.395) 0:00:00.395 ********* 2026-03-03 01:10:05.901802 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:10:05.901808 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:10:05.901813 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:10:05.901818 | orchestrator | 2026-03-03 01:10:05.901857 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:10:05.901861 | orchestrator | Tuesday 03 March 2026 01:07:13 +0000 (0:00:00.319) 0:00:00.715 ********* 2026-03-03 01:10:05.901864 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-03 01:10:05.901868 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-03 01:10:05.901872 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-03 01:10:05.901875 | orchestrator | 2026-03-03 01:10:05.901888 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-03 01:10:05.901892 | orchestrator | 2026-03-03 01:10:05.901895 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-03 01:10:05.901899 | orchestrator | Tuesday 03 March 2026 01:07:13 +0000 (0:00:00.415) 0:00:01.130 ********* 2026-03-03 01:10:05.901902 | orchestrator | included: 
/ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:10:05.901906 | orchestrator | 2026-03-03 01:10:05.901909 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-03 01:10:05.901912 | orchestrator | Tuesday 03 March 2026 01:07:14 +0000 (0:00:00.586) 0:00:01.717 ********* 2026-03-03 01:10:05.901923 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-03 01:10:05.901926 | orchestrator | 2026-03-03 01:10:05.901930 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-03 01:10:05.901933 | orchestrator | Tuesday 03 March 2026 01:07:18 +0000 (0:00:04.025) 0:00:05.743 ********* 2026-03-03 01:10:05.901936 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-03 01:10:05.901940 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-03 01:10:05.901943 | orchestrator | 2026-03-03 01:10:05.901946 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-03 01:10:05.901949 | orchestrator | Tuesday 03 March 2026 01:07:25 +0000 (0:00:06.743) 0:00:12.486 ********* 2026-03-03 01:10:05.901952 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-03 01:10:05.901956 | orchestrator | 2026-03-03 01:10:05.901959 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-03 01:10:05.901962 | orchestrator | Tuesday 03 March 2026 01:07:28 +0000 (0:00:03.626) 0:00:16.112 ********* 2026-03-03 01:10:05.901965 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-03 01:10:05.901989 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-03 01:10:05.901992 | orchestrator | 2026-03-03 
01:10:05.902005 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-03 01:10:05.902009 | orchestrator | Tuesday 03 March 2026 01:07:32 +0000 (0:00:03.798) 0:00:19.910 ********* 2026-03-03 01:10:05.902068 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-03 01:10:05.902073 | orchestrator | 2026-03-03 01:10:05.902077 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-03 01:10:05.902080 | orchestrator | Tuesday 03 March 2026 01:07:36 +0000 (0:00:03.400) 0:00:23.310 ********* 2026-03-03 01:10:05.902084 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-03 01:10:05.902087 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-03 01:10:05.902090 | orchestrator | 2026-03-03 01:10:05.902093 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-03 01:10:05.902096 | orchestrator | Tuesday 03 March 2026 01:07:43 +0000 (0:00:07.567) 0:00:30.878 ********* 2026-03-03 01:10:05.902102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.902117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.902217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.902221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}) 2026-03-03 01:10:05.902343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902403 | orchestrator | 2026-03-03 01:10:05.902407 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-03 01:10:05.902412 | orchestrator | Tuesday 03 March 2026 01:07:45 +0000 (0:00:02.199) 0:00:33.078 ********* 2026-03-03 01:10:05.902418 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:10:05.902423 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:10:05.902428 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:10:05.902433 | orchestrator | 2026-03-03 01:10:05.902439 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-03 01:10:05.902445 | orchestrator | Tuesday 03 March 2026 01:07:46 +0000 (0:00:00.401) 0:00:33.479 ********* 2026-03-03 01:10:05.902451 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:10:05.902456 | orchestrator | 2026-03-03 01:10:05.902462 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-03 01:10:05.902467 | orchestrator | Tuesday 03 March 2026 01:07:47 +0000 (0:00:01.416) 0:00:34.896 ********* 2026-03-03 01:10:05.902477 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-03 01:10:05.902483 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-03 01:10:05.902488 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-03 01:10:05.902576 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-03 01:10:05.902583 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-03 01:10:05.902586 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-03 01:10:05.902589 | orchestrator 
| 2026-03-03 01:10:05.902592 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-03 01:10:05.902596 | orchestrator | Tuesday 03 March 2026 01:07:49 +0000 (0:00:02.052) 0:00:36.948 ********* 2026-03-03 01:10:05.902602 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-03 01:10:05.902611 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-03 01:10:05.902614 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-03 01:10:05.902618 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-03 01:10:05.902633 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-03 01:10:05.902637 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-03 01:10:05.902646 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, 
{'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-03 01:10:05.902650 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-03 01:10:05.902653 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-03 01:10:05.902666 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-03 01:10:05.902670 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-03 01:10:05.902678 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 
'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-03 01:10:05.902681 | orchestrator | 2026-03-03 01:10:05.902684 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-03 01:10:05.902687 | orchestrator | Tuesday 03 March 2026 01:07:53 +0000 (0:00:03.425) 0:00:40.373 ********* 2026-03-03 01:10:05.902691 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-03 01:10:05.902694 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-03 01:10:05.902697 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-03 01:10:05.902700 | orchestrator | 2026-03-03 01:10:05.902703 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-03 01:10:05.902707 | orchestrator | Tuesday 03 March 2026 01:07:55 +0000 (0:00:02.114) 0:00:42.488 ********* 2026-03-03 01:10:05.902710 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-03 01:10:05.902713 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-03 01:10:05.902716 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-03 01:10:05.902719 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-03 01:10:05.902722 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-03 01:10:05.902725 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-03 01:10:05.902728 | orchestrator | 2026-03-03 01:10:05.902731 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-03 01:10:05.902735 | orchestrator | Tuesday 03 March 2026 01:07:58 +0000 (0:00:03.485) 0:00:45.974 ********* 2026-03-03 01:10:05.902738 | 
orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-03 01:10:05.902742 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-03 01:10:05.902747 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-03 01:10:05.902752 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-03 01:10:05.902757 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-03 01:10:05.902764 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-03 01:10:05.902769 | orchestrator | 2026-03-03 01:10:05.902775 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-03 01:10:05.902780 | orchestrator | Tuesday 03 March 2026 01:07:59 +0000 (0:00:01.037) 0:00:47.012 ********* 2026-03-03 01:10:05.902785 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:10:05.902790 | orchestrator | 2026-03-03 01:10:05.902798 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-03 01:10:05.902803 | orchestrator | Tuesday 03 March 2026 01:07:59 +0000 (0:00:00.178) 0:00:47.190 ********* 2026-03-03 01:10:05.902809 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:10:05.902814 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:10:05.902823 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:10:05.902829 | orchestrator | 2026-03-03 01:10:05.902834 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-03 01:10:05.902839 | orchestrator | Tuesday 03 March 2026 01:08:00 +0000 (0:00:00.462) 0:00:47.653 ********* 2026-03-03 01:10:05.902844 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:10:05.902850 | orchestrator | 2026-03-03 01:10:05.902855 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-03 01:10:05.902876 | 
orchestrator | Tuesday 03 March 2026 01:08:01 +0000 (0:00:01.071) 0:00:48.725 ********* 2026-03-03 01:10:05.902882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.902890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.902895 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.902900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.902962 | orchestrator | 2026-03-03 01:10:05.902967 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-03 01:10:05.902973 | orchestrator | Tuesday 03 March 2026 01:08:05 +0000 (0:00:03.888) 0:00:52.613 ********* 2026-03-03 01:10:05.902980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:10:05.902984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.902989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.902994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903003 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:10:05.903011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:10:05.903016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903032 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:10:05.903038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:10:05.903048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2026-03-03 01:10:05.903061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903069 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:10:05.903074 | orchestrator | 2026-03-03 01:10:05.903078 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-03 01:10:05.903083 | orchestrator | Tuesday 03 March 2026 01:08:06 +0000 (0:00:00.786) 0:00:53.400 ********* 2026-03-03 01:10:05.903088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  
2026-03-03 01:10:05.903093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903114 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:10:05.903119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:10:05.903126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903158 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:10:05.903163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:10:05.903172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903191 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:10:05.903196 | orchestrator | 2026-03-03 01:10:05.903201 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-03 01:10:05.903210 | orchestrator | Tuesday 03 March 2026 01:08:07 +0000 (0:00:01.769) 0:00:55.169 ********* 2026-03-03 01:10:05.903216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.903222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.903231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.903239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903299 | orchestrator | 2026-03-03 01:10:05.903304 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-03 01:10:05.903309 | orchestrator | Tuesday 03 
March 2026 01:08:11 +0000 (0:00:03.950) 0:00:59.120 ********* 2026-03-03 01:10:05.903314 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-03 01:10:05.903320 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-03 01:10:05.903325 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-03 01:10:05.903339 | orchestrator | 2026-03-03 01:10:05.903344 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-03 01:10:05.903354 | orchestrator | Tuesday 03 March 2026 01:08:13 +0000 (0:00:01.525) 0:01:00.645 ********* 2026-03-03 01:10:05.903364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.903370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.903379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.903389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903453 | orchestrator | 2026-03-03 01:10:05.903459 | orchestrator | TASK [cinder : Generating 'hostnqn' file 
for cinder_volume] ******************** 2026-03-03 01:10:05.903463 | orchestrator | Tuesday 03 March 2026 01:08:28 +0000 (0:00:15.127) 0:01:15.772 ********* 2026-03-03 01:10:05.903468 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:10:05.903472 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:10:05.903477 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:10:05.903481 | orchestrator | 2026-03-03 01:10:05.903486 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-03 01:10:05.903493 | orchestrator | Tuesday 03 March 2026 01:08:30 +0000 (0:00:01.738) 0:01:17.511 ********* 2026-03-03 01:10:05.903498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:10:05.903511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903527 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:10:05.903533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:10:05.903543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903568 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:10:05.903573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-03 01:10:05.903582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-03 01:10:05.903606 | orchestrator | skipping: 
[testbed-node-2] 2026-03-03 01:10:05.903612 | orchestrator | 2026-03-03 01:10:05.903616 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-03 01:10:05.903621 | orchestrator | Tuesday 03 March 2026 01:08:31 +0000 (0:00:01.190) 0:01:18.701 ********* 2026-03-03 01:10:05.903626 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:10:05.903631 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:10:05.903637 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:10:05.903642 | orchestrator | 2026-03-03 01:10:05.903647 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-03 01:10:05.903652 | orchestrator | Tuesday 03 March 2026 01:08:32 +0000 (0:00:00.611) 0:01:19.313 ********* 2026-03-03 01:10:05.903661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.903667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.903672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-03 01:10:05.903681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-03 01:10:05.903753 | orchestrator | 2026-03-03 01:10:05.903758 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-03-03 01:10:05.903763 | orchestrator | Tuesday 03 March 2026 01:08:35 +0000 (0:00:03.402) 0:01:22.715 ********* 2026-03-03 01:10:05.903768 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:10:05.903773 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:10:05.903779 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:10:05.903784 | orchestrator | 2026-03-03 01:10:05.903789 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-03 01:10:05.903794 | orchestrator | Tuesday 03 March 2026 01:08:35 +0000 (0:00:00.432) 0:01:23.148 ********* 2026-03-03 01:10:05.903799 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:10:05.903803 | orchestrator | 2026-03-03 01:10:05.903806 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-03 01:10:05.903809 | orchestrator | Tuesday 03 March 2026 01:08:38 +0000 (0:00:02.156) 0:01:25.304 ********* 2026-03-03 01:10:05.903812 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:10:05.903815 | orchestrator | 2026-03-03 01:10:05.903818 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-03 01:10:05.903821 | orchestrator | Tuesday 03 March 2026 01:08:40 +0000 (0:00:02.080) 0:01:27.385 ********* 2026-03-03 01:10:05.903824 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:10:05.903827 | orchestrator | 2026-03-03 01:10:05.903831 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-03 01:10:05.903834 | orchestrator | Tuesday 03 March 2026 01:08:56 +0000 (0:00:15.867) 0:01:43.252 ********* 2026-03-03 01:10:05.903837 | orchestrator | 2026-03-03 01:10:05.903840 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-03 01:10:05.903843 | orchestrator | Tuesday 03 March 2026 01:08:56 +0000 (0:00:00.118) 
0:01:43.371 ********* 2026-03-03 01:10:05.903846 | orchestrator | 2026-03-03 01:10:05.903849 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-03 01:10:05.903852 | orchestrator | Tuesday 03 March 2026 01:08:56 +0000 (0:00:00.053) 0:01:43.424 ********* 2026-03-03 01:10:05.903855 | orchestrator | 2026-03-03 01:10:05.903858 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-03 01:10:05.903861 | orchestrator | Tuesday 03 March 2026 01:08:56 +0000 (0:00:00.051) 0:01:43.476 ********* 2026-03-03 01:10:05.903867 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:10:05.903870 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:10:05.903874 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:10:05.903877 | orchestrator | 2026-03-03 01:10:05.903880 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-03 01:10:05.903883 | orchestrator | Tuesday 03 March 2026 01:09:19 +0000 (0:00:22.922) 0:02:06.398 ********* 2026-03-03 01:10:05.903886 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:10:05.903889 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:10:05.903892 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:10:05.903895 | orchestrator | 2026-03-03 01:10:05.903898 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-03 01:10:05.903902 | orchestrator | Tuesday 03 March 2026 01:09:33 +0000 (0:00:14.180) 0:02:20.579 ********* 2026-03-03 01:10:05.903905 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:10:05.903908 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:10:05.903911 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:10:05.903914 | orchestrator | 2026-03-03 01:10:05.903917 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-03 01:10:05.903920 | 
orchestrator | Tuesday 03 March 2026 01:09:50 +0000 (0:00:17.560) 0:02:38.139 ********* 2026-03-03 01:10:05.903923 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:10:05.903927 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:10:05.903930 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:10:05.903933 | orchestrator | 2026-03-03 01:10:05.903936 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-03 01:10:05.903942 | orchestrator | Tuesday 03 March 2026 01:10:02 +0000 (0:00:12.069) 0:02:50.209 ********* 2026-03-03 01:10:05.903945 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:10:05.903948 | orchestrator | 2026-03-03 01:10:05.903951 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:10:05.903955 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-03 01:10:05.903959 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-03 01:10:05.903962 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-03 01:10:05.903965 | orchestrator | 2026-03-03 01:10:05.903968 | orchestrator | 2026-03-03 01:10:05.903971 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:10:05.903975 | orchestrator | Tuesday 03 March 2026 01:10:03 +0000 (0:00:00.232) 0:02:50.441 ********* 2026-03-03 01:10:05.903978 | orchestrator | =============================================================================== 2026-03-03 01:10:05.903981 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.92s 2026-03-03 01:10:05.903984 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 17.56s 2026-03-03 01:10:05.903988 | orchestrator | cinder : Running Cinder 
bootstrap container ---------------------------- 15.87s 2026-03-03 01:10:05.903991 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.13s 2026-03-03 01:10:05.903996 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 14.18s 2026-03-03 01:10:05.904000 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.07s 2026-03-03 01:10:05.904005 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.57s 2026-03-03 01:10:05.904009 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.74s 2026-03-03 01:10:05.904017 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.03s 2026-03-03 01:10:05.904022 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.95s 2026-03-03 01:10:05.904031 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.89s 2026-03-03 01:10:05.904036 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.80s 2026-03-03 01:10:05.904040 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.63s 2026-03-03 01:10:05.904045 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.49s 2026-03-03 01:10:05.904051 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.42s 2026-03-03 01:10:05.904056 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.40s 2026-03-03 01:10:05.904061 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.40s 2026-03-03 01:10:05.904066 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.20s 2026-03-03 01:10:05.904072 | orchestrator | cinder : Creating Cinder database 
--------------------------------------- 2.16s 2026-03-03 01:10:05.904076 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.11s 2026-03-03 01:10:05.904082 | orchestrator | 2026-03-03 01:10:05 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:10:08.954526 | orchestrator | 2026-03-03 01:10:08 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED 2026-03-03 01:10:08.958606 | orchestrator | 2026-03-03 01:10:08 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:10:08.962596 | orchestrator | 2026-03-03 01:10:08 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:10:08.965368 | orchestrator | 2026-03-03 01:10:08 | INFO  | Task 85aeeb43-e191-4d35-bb63-de6d94ea1626 is in state STARTED 2026-03-03 01:10:08.965440 | orchestrator | 2026-03-03 01:10:08 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:10:12.011480 | orchestrator | 2026-03-03 01:10:12 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED 2026-03-03 01:10:12.014665 | orchestrator | 2026-03-03 01:10:12 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:10:12.017792 | orchestrator | 2026-03-03 01:10:12 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:10:12.020633 | orchestrator | 2026-03-03 01:10:12 | INFO  | Task 85aeeb43-e191-4d35-bb63-de6d94ea1626 is in state STARTED 2026-03-03 01:10:12.020722 | orchestrator | 2026-03-03 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:10:15.093005 | orchestrator | 2026-03-03 01:10:15 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED 2026-03-03 01:10:15.093967 | orchestrator | 2026-03-03 01:10:15 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:10:15.095036 | orchestrator | 2026-03-03 01:10:15 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state 
STARTED 2026-03-03 01:10:15.096521 | orchestrator | 2026-03-03 01:10:15 | INFO  | Task 85aeeb43-e191-4d35-bb63-de6d94ea1626 is in state STARTED 2026-03-03 01:10:15.096567 | orchestrator | 2026-03-03 01:10:15 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:11:55.332171 | orchestrator | 2026-03-03 01:11:55 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED 2026-03-03 01:11:55.332454 | orchestrator | 2026-03-03 01:11:55 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:11:55.333121 | orchestrator | 2026-03-03 01:11:55 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:11:55.333597 | orchestrator | 2026-03-03 01:11:55 | INFO  | Task 
85aeeb43-e191-4d35-bb63-de6d94ea1626 is in state STARTED 2026-03-03 01:11:55.333838 | orchestrator | 2026-03-03 01:11:55 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:11:58.369202 | orchestrator | 2026-03-03 01:11:58 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED 2026-03-03 01:11:58.369765 | orchestrator | 2026-03-03 01:11:58 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:11:58.371395 | orchestrator | 2026-03-03 01:11:58 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:11:58.372039 | orchestrator | 2026-03-03 01:11:58 | INFO  | Task 85aeeb43-e191-4d35-bb63-de6d94ea1626 is in state STARTED 2026-03-03 01:11:58.372101 | orchestrator | 2026-03-03 01:11:58 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:12:01.494919 | orchestrator | 2026-03-03 01:12:01 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED 2026-03-03 01:12:01.496744 | orchestrator | 2026-03-03 01:12:01 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED 2026-03-03 01:12:01.497433 | orchestrator | 2026-03-03 01:12:01 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:12:01.498958 | orchestrator | 2026-03-03 01:12:01 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:12:01.499807 | orchestrator | 2026-03-03 01:12:01.499839 | orchestrator | 2026-03-03 01:12:01 | INFO  | Task 85aeeb43-e191-4d35-bb63-de6d94ea1626 is in state SUCCESS 2026-03-03 01:12:01.500954 | orchestrator | 2026-03-03 01:12:01.501002 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 01:12:01.501008 | orchestrator | 2026-03-03 01:12:01.501012 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:12:01.501015 | orchestrator | Tuesday 03 March 2026 01:10:02 +0000 (0:00:00.243) 0:00:00.243 
********* 2026-03-03 01:12:01.501019 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:12:01.501023 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:12:01.501026 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:12:01.501029 | orchestrator | 2026-03-03 01:12:01.501032 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:12:01.501036 | orchestrator | Tuesday 03 March 2026 01:10:03 +0000 (0:00:00.219) 0:00:00.463 ********* 2026-03-03 01:12:01.501039 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-03 01:12:01.501042 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-03 01:12:01.501045 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-03 01:12:01.501049 | orchestrator | 2026-03-03 01:12:01.501052 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-03 01:12:01.501055 | orchestrator | 2026-03-03 01:12:01.501058 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-03 01:12:01.501061 | orchestrator | Tuesday 03 March 2026 01:10:03 +0000 (0:00:00.305) 0:00:00.769 ********* 2026-03-03 01:12:01.501064 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:12:01.501068 | orchestrator | 2026-03-03 01:12:01.501071 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-03 01:12:01.501074 | orchestrator | Tuesday 03 March 2026 01:10:03 +0000 (0:00:00.381) 0:00:01.150 ********* 2026-03-03 01:12:01.501077 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-03 01:12:01.501080 | orchestrator | 2026-03-03 01:12:01.501089 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-03 01:12:01.501093 | orchestrator | Tuesday 03 March 
2026 01:10:07 +0000 (0:00:03.192) 0:00:04.342 ********* 2026-03-03 01:12:01.501096 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-03 01:12:01.501099 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-03 01:12:01.501102 | orchestrator | 2026-03-03 01:12:01.501105 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-03 01:12:01.501108 | orchestrator | Tuesday 03 March 2026 01:10:13 +0000 (0:00:06.105) 0:00:10.447 ********* 2026-03-03 01:12:01.501112 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-03 01:12:01.501115 | orchestrator | 2026-03-03 01:12:01.501118 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-03 01:12:01.501121 | orchestrator | Tuesday 03 March 2026 01:10:16 +0000 (0:00:03.019) 0:00:13.467 ********* 2026-03-03 01:12:01.501124 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-03 01:12:01.501127 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-03 01:12:01.501131 | orchestrator | 2026-03-03 01:12:01.501134 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-03 01:12:01.501137 | orchestrator | Tuesday 03 March 2026 01:10:19 +0000 (0:00:03.534) 0:00:17.002 ********* 2026-03-03 01:12:01.501140 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-03 01:12:01.501143 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-03 01:12:01.501146 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-03 01:12:01.501149 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-03 01:12:01.501153 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-03 01:12:01.501156 | orchestrator | 2026-03-03 01:12:01.501159 
| orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-03 01:12:01.501169 | orchestrator | Tuesday 03 March 2026 01:10:35 +0000 (0:00:15.370) 0:00:32.372 *********
2026-03-03 01:12:01.501172 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-03 01:12:01.501175 | orchestrator |
2026-03-03 01:12:01.501178 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-03 01:12:01.501181 | orchestrator | Tuesday 03 March 2026 01:10:38 +0000 (0:00:03.782) 0:00:36.154 *********
2026-03-03 01:12:01.501272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-03 01:12:01.501291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-03 01:12:01.501295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501360 | orchestrator | 2026-03-03 01:12:01.501365 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-03 01:12:01.501370 | orchestrator | Tuesday 03 March 2026 01:10:41 +0000 (0:00:02.620) 0:00:38.775 ********* 2026-03-03 01:12:01.501375 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-03 01:12:01.501380 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-03 01:12:01.501384 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 
2026-03-03 01:12:01.501389 | orchestrator |
2026-03-03 01:12:01.501393 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-03 01:12:01.501397 | orchestrator | Tuesday 03 March 2026 01:10:42 +0000 (0:00:00.831) 0:00:39.607 *********
2026-03-03 01:12:01.501402 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:12:01.501408 | orchestrator |
2026-03-03 01:12:01.501412 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-03 01:12:01.501422 | orchestrator | Tuesday 03 March 2026 01:10:42 +0000 (0:00:00.112) 0:00:39.720 *********
2026-03-03 01:12:01.501427 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:12:01.501432 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:12:01.501437 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:12:01.501442 | orchestrator |
2026-03-03 01:12:01.501448 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-03 01:12:01.501453 | orchestrator | Tuesday 03 March 2026 01:10:42 +0000 (0:00:00.394) 0:00:40.115 *********
2026-03-03 01:12:01.501459 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:12:01.501463 | orchestrator |
2026-03-03 01:12:01.501466 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-03-03 01:12:01.501469 | orchestrator | Tuesday 03 March 2026 01:10:43 +0000 (0:00:00.483) 0:00:40.598 *********
2026-03-03 01:12:01.501472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-03 01:12:01.501480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-03 01:12:01.501484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-03 01:12:01.501490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:12:01.501531 | orchestrator | 2026-03-03 01:12:01.501535 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-03 01:12:01.501540 | orchestrator | Tuesday 03 March 2026 01:10:46 +0000 (0:00:03.332) 0:00:43.930 ********* 2026-03-03 01:12:01.501548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-03 01:12:01.501557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501569 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:12:01.501577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-03-03 01:12:01.501582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501593 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:12:01.501597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-03 01:12:01.501600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501607 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:12:01.501610 | orchestrator | 2026-03-03 01:12:01.501615 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-03 01:12:01.501620 | orchestrator | Tuesday 03 March 2026 01:10:48 +0000 
(0:00:02.079) 0:00:46.010 ********* 2026-03-03 01:12:01.501629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-03 01:12:01.501634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501657 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:12:01.501662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-03 01:12:01.501668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501678 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:12:01.501685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-03 01:12:01.501693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:12:01.501700 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:12:01.501704 | orchestrator | 2026-03-03 01:12:01.501709 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-03 01:12:01.501714 | orchestrator | Tuesday 03 March 2026 01:10:50 +0000 (0:00:01.829) 0:00:47.839 ********* 2026-03-03 01:12:01.501720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501783 | orchestrator |
2026-03-03 01:12:01.501787 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-03 01:12:01.501790 | orchestrator | Tuesday 03 March 2026 01:10:53 +0000 (0:00:02.917) 0:00:50.757 *********
2026-03-03 01:12:01.501794 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:12:01.501797 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:12:01.501803 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:12:01.501808 | orchestrator |
2026-03-03 01:12:01.501813 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-03 01:12:01.501818 | orchestrator | Tuesday 03 March 2026 01:10:56 +0000 (0:00:02.983) 0:00:53.741 *********
2026-03-03 01:12:01.501824 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-03 01:12:01.501828 | orchestrator |
2026-03-03 01:12:01.501834 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-03 01:12:01.501839 | orchestrator | Tuesday 03 March 2026 01:10:57 +0000 (0:00:01.439) 0:00:55.181 *********
2026-03-03 01:12:01.501844 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:12:01.501852 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:12:01.501857 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:12:01.501862 | orchestrator |
2026-03-03 01:12:01.501868 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-03 01:12:01.501873 | orchestrator | Tuesday 03 March 2026 01:10:59 +0000 (0:00:01.378) 0:00:56.560 *********
2026-03-03 01:12:01.501879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api',
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501930 | orchestrator |
2026-03-03 01:12:01.501934 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-03-03 01:12:01.501938 | orchestrator | Tuesday 03 March 2026 01:11:09 +0000 (0:00:10.144) 0:01:06.704 *********
2026-03-03 01:12:01.501944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501950 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501958 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:12:01.501962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501978 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:12:01.501982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.501988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.501996 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:12:01.501999 | orchestrator |
2026-03-03 01:12:01.502003 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2026-03-03 01:12:01.502007 | orchestrator | Tuesday 03 March 2026 01:11:10 +0000 (0:00:01.066) 0:01:07.771 *********
2026-03-03 01:12:01.502034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.502046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311',
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.502052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-03 01:12:01.502056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.502060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.502064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.502071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.502079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.502083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:12:01.502087 | orchestrator |
2026-03-03 01:12:01.502091 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-03 01:12:01.502095 | orchestrator | Tuesday 03 March 2026 01:11:13 +0000 (0:00:03.400) 0:01:11.172 *********
2026-03-03 01:12:01.502098 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:12:01.502102 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:12:01.502106 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:12:01.502110 | orchestrator |
2026-03-03 01:12:01.502115 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-03 01:12:01.502119 | orchestrator | Tuesday 03 March 2026 01:11:14 +0000 (0:00:00.889) 0:01:12.061 *********
2026-03-03 01:12:01.502123 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:12:01.502127 | orchestrator |
2026-03-03 01:12:01.502131 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-03 01:12:01.502134 | orchestrator | Tuesday 03 March 2026 01:11:17 +0000 (0:00:02.987) 0:01:15.048 *********
2026-03-03 01:12:01.502138 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:12:01.502142 | orchestrator |
2026-03-03 01:12:01.502146 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-03 01:12:01.502149 | orchestrator | Tuesday 03 March 2026 01:11:20 +0000 (0:00:02.987) 0:01:18.036 *********
2026-03-03 01:12:01.502153 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:12:01.502157 | orchestrator |
2026-03-03 01:12:01.502160 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-03 01:12:01.502164 | orchestrator | Tuesday 03 March 2026 01:11:32 +0000 (0:00:11.562) 0:01:29.599 *********
2026-03-03 01:12:01.502170 | orchestrator |
2026-03-03 01:12:01.502174 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-03 01:12:01.502178 | orchestrator | Tuesday 03 March 2026 01:11:32 +0000 (0:00:00.050) 0:01:29.649 *********
2026-03-03 01:12:01.502182 | orchestrator |
2026-03-03 01:12:01.502185 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-03 01:12:01.502189 | orchestrator | Tuesday 03 March 2026 01:11:32 +0000 (0:00:00.049) 0:01:29.698 *********
2026-03-03 01:12:01.502193 | orchestrator |
2026-03-03 01:12:01.502197 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-03 01:12:01.502201 | orchestrator | Tuesday 03 March 2026 01:11:32 +0000 (0:00:00.052) 0:01:29.751 *********
2026-03-03 01:12:01.502204 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:12:01.502208 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:12:01.502212 | orchestrator |
changed: [testbed-node-2]
2026-03-03 01:12:01.502216 | orchestrator |
2026-03-03 01:12:01.502220 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-03 01:12:01.502223 | orchestrator | Tuesday 03 March 2026 01:11:44 +0000 (0:00:12.210) 0:01:41.962 *********
2026-03-03 01:12:01.502226 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:12:01.502230 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:12:01.502234 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:12:01.502237 | orchestrator |
2026-03-03 01:12:01.502240 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-03 01:12:01.502243 | orchestrator | Tuesday 03 March 2026 01:11:49 +0000 (0:00:05.183) 0:01:47.145 *********
2026-03-03 01:12:01.502247 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:12:01.502250 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:12:01.502253 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:12:01.502256 | orchestrator |
2026-03-03 01:12:01.502260 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:12:01.502263 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-03 01:12:01.502268 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-03 01:12:01.502271 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-03 01:12:01.502274 | orchestrator |
2026-03-03 01:12:01.502277 | orchestrator |
2026-03-03 01:12:01.502281 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:12:01.502284 | orchestrator | Tuesday 03 March 2026 01:11:58 +0000 (0:00:08.445) 0:01:55.591 *********
2026-03-03 01:12:01.502287 | orchestrator | ===============================================================================
2026-03-03 01:12:01.502290 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.37s
2026-03-03 01:12:01.502296 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.21s
2026-03-03 01:12:01.502299 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.56s
2026-03-03 01:12:01.502303 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.14s
2026-03-03 01:12:01.502306 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.45s
2026-03-03 01:12:01.502309 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.11s
2026-03-03 01:12:01.502354 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.18s
2026-03-03 01:12:01.502360 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.78s
2026-03-03 01:12:01.502363 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.53s
2026-03-03 01:12:01.502366 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.40s
2026-03-03 01:12:01.502369 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.33s
2026-03-03 01:12:01.502376 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.19s
2026-03-03 01:12:01.502379 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.02s
2026-03-03 01:12:01.502382 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.99s
2026-03-03 01:12:01.502386 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.99s
2026-03-03 01:12:01.502389 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.98s
2026-03-03 01:12:01.502392 | orchestrator | barbican : Copying over config.json files for services ------------------ 2.92s
2026-03-03 01:12:01.502398 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.62s
2026-03-03 01:12:01.502401 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.08s
2026-03-03 01:12:01.502404 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.83s
2026-03-03 01:12:01.502408 | orchestrator | 2026-03-03 01:12:01 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:04.527638 | orchestrator | 2026-03-03 01:12:04 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:04.529939 | orchestrator | 2026-03-03 01:12:04 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:04.529992 | orchestrator | 2026-03-03 01:12:04 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:04.530000 | orchestrator | 2026-03-03 01:12:04 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:04.530004 | orchestrator | 2026-03-03 01:12:04 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:07.554623 | orchestrator | 2026-03-03 01:12:07 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:07.555042 | orchestrator | 2026-03-03 01:12:07 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:07.555669 | orchestrator | 2026-03-03 01:12:07 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:07.556218 | orchestrator | 2026-03-03 01:12:07 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:07.556275 | orchestrator | 2026-03-03 01:12:07 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:10.578664 | orchestrator | 2026-03-03 01:12:10 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:10.578953 | orchestrator | 2026-03-03 01:12:10 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:10.579650 | orchestrator | 2026-03-03 01:12:10 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:10.580627 | orchestrator | 2026-03-03 01:12:10 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:10.580758 | orchestrator | 2026-03-03 01:12:10 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:13.610204 | orchestrator | 2026-03-03 01:12:13 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:13.610492 | orchestrator | 2026-03-03 01:12:13 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:13.611256 | orchestrator | 2026-03-03 01:12:13 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:13.612030 | orchestrator | 2026-03-03 01:12:13 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:13.612056 | orchestrator | 2026-03-03 01:12:13 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:16.640692 | orchestrator | 2026-03-03 01:12:16 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:16.640763 | orchestrator | 2026-03-03 01:12:16 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:16.641403 | orchestrator | 2026-03-03 01:12:16 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:16.641913 | orchestrator | 2026-03-03 01:12:16 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:16.641932 | orchestrator | 2026-03-03 01:12:16 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:19.677802 | orchestrator | 2026-03-03 01:12:19 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:19.678110 | orchestrator | 2026-03-03 01:12:19 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:19.678883 | orchestrator | 2026-03-03 01:12:19 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:19.679537 | orchestrator | 2026-03-03 01:12:19 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:19.679570 | orchestrator | 2026-03-03 01:12:19 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:22.706099 | orchestrator | 2026-03-03 01:12:22 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:22.706399 | orchestrator | 2026-03-03 01:12:22 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:22.706933 | orchestrator | 2026-03-03 01:12:22 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:22.707603 | orchestrator | 2026-03-03 01:12:22 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:22.708253 | orchestrator | 2026-03-03 01:12:22 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:25.746773 | orchestrator | 2026-03-03 01:12:25 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:25.748642 | orchestrator | 2026-03-03 01:12:25 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:25.752009 | orchestrator | 2026-03-03 01:12:25 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:25.752637 | orchestrator | 2026-03-03 01:12:25 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:25.752648 | orchestrator | 2026-03-03 01:12:25 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:28.784662 | orchestrator | 2026-03-03 01:12:28 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:28.786327 | orchestrator | 2026-03-03 01:12:28 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:28.787118 | orchestrator | 2026-03-03 01:12:28 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:28.788538 | orchestrator | 2026-03-03 01:12:28 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:28.788581 | orchestrator | 2026-03-03 01:12:28 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:31.835853 | orchestrator | 2026-03-03 01:12:31 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:31.836803 | orchestrator | 2026-03-03 01:12:31 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:31.838165 | orchestrator | 2026-03-03 01:12:31 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:31.839888 | orchestrator | 2026-03-03 01:12:31 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:31.839952 | orchestrator | 2026-03-03 01:12:31 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:34.885461 | orchestrator | 2026-03-03 01:12:34 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:34.886044 | orchestrator | 2026-03-03 01:12:34 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:34.888699 | orchestrator | 2026-03-03 01:12:34 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:34.889595 | orchestrator | 2026-03-03 01:12:34 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:34.889641 | orchestrator | 2026-03-03 01:12:34 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:37.927204 | orchestrator | 2026-03-03 01:12:37 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:37.927990 | orchestrator | 2026-03-03 01:12:37 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:37.929269 | orchestrator | 2026-03-03 01:12:37 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:37.930581 | orchestrator | 2026-03-03 01:12:37 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:37.930641 | orchestrator | 2026-03-03 01:12:37 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:40.969987 | orchestrator | 2026-03-03 01:12:40 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:40.971876 | orchestrator | 2026-03-03 01:12:40 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:40.973481 | orchestrator | 2026-03-03 01:12:40 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:40.975296 | orchestrator | 2026-03-03 01:12:40 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:40.975333 | orchestrator | 2026-03-03 01:12:40 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:44.028459 | orchestrator | 2026-03-03 01:12:44 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:44.028847 | orchestrator | 2026-03-03 01:12:44 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state STARTED
2026-03-03 01:12:44.029904 | orchestrator | 2026-03-03 01:12:44 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:44.047047 | orchestrator | 2026-03-03 01:12:44 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:44.047103 | orchestrator | 2026-03-03 01:12:44 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:47.085443 | orchestrator | 2026-03-03 01:12:47 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:47.087147 | orchestrator | 2026-03-03 01:12:47 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:12:47.087931 | orchestrator | 2026-03-03 01:12:47 | INFO  | Task ca06f2d1-c49f-4bd3-98fe-0ee84927cfdc is in state SUCCESS
2026-03-03 01:12:47.088998 | orchestrator | 2026-03-03 01:12:47 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:47.090062 | orchestrator | 2026-03-03 01:12:47 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:47.090237 | orchestrator | 2026-03-03 01:12:47 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:50.135479 | orchestrator | 2026-03-03 01:12:50 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:50.135644 | orchestrator | 2026-03-03 01:12:50 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:12:50.136676 | orchestrator | 2026-03-03 01:12:50 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:50.138678 | orchestrator | 2026-03-03 01:12:50 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:50.138719 | orchestrator | 2026-03-03 01:12:50 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:53.175008 | orchestrator | 2026-03-03 01:12:53 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:53.175472 | orchestrator | 2026-03-03 01:12:53 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:12:53.177576 | orchestrator | 2026-03-03 01:12:53 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:53.178274 | orchestrator | 2026-03-03 01:12:53 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:53.178298 | orchestrator | 2026-03-03 01:12:53 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:56.213451 | orchestrator | 2026-03-03 01:12:56 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:56.213641 | orchestrator | 2026-03-03 01:12:56 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:12:56.214495 | orchestrator | 2026-03-03 01:12:56 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:56.215562 | orchestrator | 2026-03-03 01:12:56 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:56.215785 | orchestrator | 2026-03-03 01:12:56 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:12:59.241761 | orchestrator | 2026-03-03 01:12:59 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:12:59.242322 | orchestrator | 2026-03-03 01:12:59 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:12:59.245158 | orchestrator | 2026-03-03 01:12:59 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:12:59.245969 | orchestrator | 2026-03-03 01:12:59 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:12:59.245999 | orchestrator | 2026-03-03 01:12:59 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:13:02.279029 | orchestrator | 2026-03-03 01:13:02 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:13:02.279512 | orchestrator | 2026-03-03 01:13:02 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:13:02.281726 | orchestrator | 2026-03-03 01:13:02 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:13:02.283168 | orchestrator | 2026-03-03 01:13:02 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:13:02.283212 | orchestrator | 2026-03-03 01:13:02 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:13:05.314749 | orchestrator | 2026-03-03 01:13:05 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:13:05.314815 | orchestrator | 2026-03-03 01:13:05 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:13:05.315310 | orchestrator | 2026-03-03 01:13:05 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:13:05.316553 | orchestrator | 2026-03-03 01:13:05 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:13:05.316575 | orchestrator | 2026-03-03 01:13:05 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:13:08.338258 | orchestrator | 2026-03-03 01:13:08 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:13:08.339550 | orchestrator | 2026-03-03 01:13:08 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:13:08.341856 | orchestrator | 2026-03-03 01:13:08 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:13:08.344137 | orchestrator | 2026-03-03 01:13:08 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:13:08.344200 | orchestrator | 2026-03-03 01:13:08 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:13:11.376502 | orchestrator | 2026-03-03 01:13:11 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state STARTED
2026-03-03 01:13:11.376691 | orchestrator | 2026-03-03 01:13:11 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:13:11.377949 | orchestrator | 2026-03-03 01:13:11 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:13:11.378464 | orchestrator | 2026-03-03 01:13:11 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:13:11.378501 | orchestrator | 2026-03-03 01:13:11 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:13:14.410337 | orchestrator |
2026-03-03 01:13:14.410491 | orchestrator |
2026-03-03 01:13:14.410502 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-03 01:13:14.410508 | orchestrator |
2026-03-03 01:13:14.410515 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-03 01:13:14.410521 | orchestrator | Tuesday 03 March 2026 01:12:06 +0000 (0:00:00.074) 0:00:00.074 *********
2026-03-03 01:13:14.410527 | orchestrator | changed: [localhost]
2026-03-03 01:13:14.410535 | orchestrator |
2026-03-03 01:13:14.410540 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-03 01:13:14.410545 | orchestrator | Tuesday 03 March 2026 01:12:07 +0000 (0:00:01.481) 0:00:01.556 *********
2026-03-03 01:13:14.410552 | orchestrator | changed: [localhost]
2026-03-03 01:13:14.410563 | orchestrator |
2026-03-03 01:13:14.410569 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-03 01:13:14.410574 | orchestrator | Tuesday 03 March 2026 01:12:39 +0000 (0:00:31.052) 0:00:32.608 *********
2026-03-03 01:13:14.410580 | orchestrator | changed: [localhost]
2026-03-03 01:13:14.410586 | orchestrator |
2026-03-03 01:13:14.410592 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:13:14.410597 | orchestrator |
2026-03-03 01:13:14.410603 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:13:14.410609 | orchestrator | Tuesday 03 March 2026 01:12:43 +0000 (0:00:04.949) 0:00:37.558 *********
2026-03-03 01:13:14.410738 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:13:14.410745 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:13:14.410751 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:13:14.410757 | orchestrator |
2026-03-03 01:13:14.410763 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 01:13:14.410768 | orchestrator | Tuesday 03 March 2026 01:12:44 +0000 (0:00:00.449) 0:00:38.008 *********
2026-03-03 01:13:14.410774 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-03 01:13:14.410780 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-03 01:13:14.411074 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-03 01:13:14.411092 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-03 01:13:14.411097 | orchestrator |
2026-03-03 01:13:14.411101 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-03 01:13:14.411105 | orchestrator | skipping: no hosts matched
2026-03-03 01:13:14.411109 | orchestrator |
2026-03-03 01:13:14.411115 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:13:14.411133 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:13:14.411142 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:13:14.411148 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:13:14.411153 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:13:14.411157 | orchestrator |
2026-03-03 01:13:14.411162 | orchestrator |
2026-03-03 01:13:14.411167 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:13:14.411171 | orchestrator | Tuesday 03 March 2026 01:12:45 +0000 (0:00:00.630) 0:00:38.638 *********
2026-03-03 01:13:14.411176 | orchestrator | ===============================================================================
2026-03-03 01:13:14.411182 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 31.05s
2026-03-03 01:13:14.411187 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.95s
2026-03-03 01:13:14.411192 | orchestrator | Ensure the destination directory exists --------------------------------- 1.48s
2026-03-03 01:13:14.411200 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-03-03 01:13:14.411206 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s
2026-03-03 01:13:14.411351 | orchestrator |
2026-03-03 01:13:14.411355 | orchestrator |
2026-03-03 01:13:14.411358 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:13:14.411361 | orchestrator |
2026-03-03 01:13:14.411364 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:13:14.411367 | orchestrator | Tuesday 03 March 2026 01:10:07 +0000 (0:00:00.260) 0:00:00.260 *********
2026-03-03 01:13:14.411468 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:13:14.411477 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:13:14.411481 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:13:14.411486 | orchestrator |
2026-03-03 01:13:14.411491 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 01:13:14.411496 | orchestrator | Tuesday 03 March 2026 01:10:07 +0000 (0:00:00.297) 0:00:00.557 *********
2026-03-03 01:13:14.411501 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-03 01:13:14.411507 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-03 01:13:14.411512 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-03 01:13:14.411517 | orchestrator |
2026-03-03 01:13:14.411522 | orchestrator | PLAY [Apply role designate]
**************************************************** 2026-03-03 01:13:14.411527 | orchestrator | 2026-03-03 01:13:14.411532 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-03 01:13:14.411538 | orchestrator | Tuesday 03 March 2026 01:10:08 +0000 (0:00:00.497) 0:00:01.054 ********* 2026-03-03 01:13:14.411543 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:13:14.411549 | orchestrator | 2026-03-03 01:13:14.411554 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-03 01:13:14.411559 | orchestrator | Tuesday 03 March 2026 01:10:09 +0000 (0:00:00.632) 0:00:01.687 ********* 2026-03-03 01:13:14.411585 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-03 01:13:14.411678 | orchestrator | 2026-03-03 01:13:14.411684 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-03 01:13:14.411689 | orchestrator | Tuesday 03 March 2026 01:10:12 +0000 (0:00:03.362) 0:00:05.050 ********* 2026-03-03 01:13:14.411694 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-03 01:13:14.411708 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-03 01:13:14.411713 | orchestrator | 2026-03-03 01:13:14.411718 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-03 01:13:14.411723 | orchestrator | Tuesday 03 March 2026 01:10:18 +0000 (0:00:06.177) 0:00:11.227 ********* 2026-03-03 01:13:14.411728 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-03 01:13:14.411734 | orchestrator | 2026-03-03 01:13:14.411739 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-03 
01:13:14.411744 | orchestrator | Tuesday 03 March 2026 01:10:21 +0000 (0:00:02.983) 0:00:14.211 ********* 2026-03-03 01:13:14.411749 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-03 01:13:14.411755 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-03 01:13:14.411760 | orchestrator | 2026-03-03 01:13:14.411765 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-03 01:13:14.411770 | orchestrator | Tuesday 03 March 2026 01:10:25 +0000 (0:00:03.731) 0:00:17.943 ********* 2026-03-03 01:13:14.411774 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-03 01:13:14.411779 | orchestrator | 2026-03-03 01:13:14.411784 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-03 01:13:14.411790 | orchestrator | Tuesday 03 March 2026 01:10:28 +0000 (0:00:03.553) 0:00:21.496 ********* 2026-03-03 01:13:14.411795 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-03 01:13:14.411800 | orchestrator | 2026-03-03 01:13:14.411806 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-03 01:13:14.411811 | orchestrator | Tuesday 03 March 2026 01:10:32 +0000 (0:00:03.768) 0:00:25.265 ********* 2026-03-03 01:13:14.411818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:13:14.411831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:13:14.411859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:13:14.411871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}}) 2026-03-03 01:13:14.411926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.411992 | orchestrator | 2026-03-03 01:13:14.411997 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-03 01:13:14.412003 | orchestrator | Tuesday 03 March 2026 01:10:36 +0000 (0:00:03.775) 0:00:29.040 ********* 2026-03-03 01:13:14.412008 | orchestrator | skipping: 
[testbed-node-0] 2026-03-03 01:13:14.412013 | orchestrator | 2026-03-03 01:13:14.412018 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-03 01:13:14.412022 | orchestrator | Tuesday 03 March 2026 01:10:36 +0000 (0:00:00.274) 0:00:29.315 ********* 2026-03-03 01:13:14.412027 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:13:14.412032 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:13:14.412037 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:13:14.412043 | orchestrator | 2026-03-03 01:13:14.412048 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-03 01:13:14.412054 | orchestrator | Tuesday 03 March 2026 01:10:37 +0000 (0:00:00.471) 0:00:29.787 ********* 2026-03-03 01:13:14.412059 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:13:14.412064 | orchestrator | 2026-03-03 01:13:14.412069 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-03 01:13:14.412074 | orchestrator | Tuesday 03 March 2026 01:10:37 +0000 (0:00:00.850) 0:00:30.638 ********* 2026-03-03 01:13:14.412080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:13:14.412089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:13:14.412115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:13:14.412122 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 
01:13:14.412139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.412280 | orchestrator | 2026-03-03 01:13:14.412286 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-03 01:13:14.412291 | orchestrator | Tuesday 03 March 2026 01:10:45 +0000 (0:00:07.058) 0:00:37.696 ********* 2026-03-03 01:13:14.412296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.412309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-03 01:13:14.412316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-03 01:13:14.412322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14 | INFO  | Task f097c36a-0c53-4245-bdce-469d1d0c8e2b is in state SUCCESS 2026-03-03 01:13:14.412343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.412355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker
5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.412360 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:13:14.412366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.412377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-03 01:13:14.412403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.412413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.412438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.412446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.412452 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:13:14.412458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.412468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-03 01:13:14.412477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.412483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.412505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.412512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412517 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:13:14.412523 | orchestrator |
2026-03-03 01:13:14.412528 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-03-03 01:13:14.412534 | orchestrator | Tuesday 03 March 2026 01:10:46 +0000 (0:00:01.309) 0:00:39.005 *********
2026-03-03 01:13:14.412539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.412549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:13:14.412558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412598 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:13:14.412604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.412613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:13:14.412621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412659 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:13:14.412664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.412673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:13:14.412681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412718 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:13:14.412724 | orchestrator |
2026-03-03 01:13:14.412729 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-03-03 01:13:14.412734 | orchestrator | Tuesday 03 March 2026 01:10:48 +0000 (0:00:02.342) 0:00:41.348 *********
2026-03-03 01:13:14.412743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.412749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.412759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.412765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:13:14.412788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:13:14.412794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:13:14.412804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412909 | orchestrator |
2026-03-03 01:13:14.412914 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-03 01:13:14.412924 | orchestrator | Tuesday 03 March 2026 01:10:55 +0000 (0:00:06.323) 0:00:47.672 *********
2026-03-03 01:13:14.412928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.412932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.412938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.412942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:13:14.412956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:13:14.412963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-03 01:13:14.412969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.412996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413053 | orchestrator |
2026-03-03 01:13:14.413057 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-03 01:13:14.413061 | orchestrator | Tuesday 03 March 2026 01:11:16 +0000 (0:00:21.742) 0:01:09.414 *********
2026-03-03 01:13:14.413065 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-03 01:13:14.413068 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-03 01:13:14.413077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-03 01:13:14.413080 | orchestrator |
2026-03-03 01:13:14.413084 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-03 01:13:14.413087 | orchestrator | Tuesday 03 March 2026 01:11:24 +0000 (0:00:07.269) 0:01:16.684 *********
2026-03-03 01:13:14.413091 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-03 01:13:14.413094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-03 01:13:14.413098 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-03 01:13:14.413101 | orchestrator |
2026-03-03 01:13:14.413105 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-03 01:13:14.413108 | orchestrator | Tuesday 03 March 2026 01:11:27 +0000 (0:00:03.019) 0:01:19.703 *********
2026-03-03 01:13:14.413112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.413116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-03 01:13:14.413122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name':
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.413129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-03 01:13:14.413153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413171 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413205 | orchestrator | 2026-03-03 01:13:14.413209 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-03 01:13:14.413212 | orchestrator | Tuesday 03 March 2026 01:11:30 +0000 (0:00:03.536) 0:01:23.240 ********* 2026-03-03 01:13:14.413216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.413220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.413226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.413234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-03 01:13:14.413240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413261 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413420 | orchestrator | 2026-03-03 01:13:14.413423 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-03 01:13:14.413427 | orchestrator | Tuesday 03 March 2026 01:11:34 +0000 (0:00:03.615) 0:01:26.855 ********* 2026-03-03 01:13:14.413430 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:13:14.413433 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:13:14.413436 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:13:14.413439 | orchestrator | 2026-03-03 01:13:14.413442 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-03 01:13:14.413449 | orchestrator | Tuesday 03 March 2026 01:11:34 +0000 (0:00:00.563) 0:01:27.418 ********* 2026-03-03 01:13:14.413452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.413455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-03 01:13:14.413459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-03 01:13:14.413464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413475 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:13:14.413515 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.413520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-03 01:13:14.413523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413541 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:13:14.413547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-03 01:13:14.413551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-03 01:13:14.413554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:13:14.413571 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:13:14.413575 | orchestrator | 2026-03-03 01:13:14.413578 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-03 01:13:14.413581 | orchestrator | Tuesday 03 March 2026 01:11:35 +0000 (0:00:00.935) 0:01:28.353 ********* 2026-03-03 01:13:14.413587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:13:14.413590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:13:14.413599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-03 01:13:14.413607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413654 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:13:14.413693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-03 01:13:14.413706 | orchestrator |
2026-03-03 01:13:14.413712 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-03 01:13:14.413717 | orchestrator | Tuesday 03 March 2026 01:11:40 +0000 (0:00:04.536) 0:01:32.890 *********
2026-03-03 01:13:14.413723 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:13:14.413728 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:13:14.413733 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:13:14.413738 | orchestrator |
2026-03-03 01:13:14.413743 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-03 01:13:14.413749 | orchestrator | Tuesday 03 March 2026 01:11:40 +0000 (0:00:00.348) 0:01:33.238 *********
2026-03-03 01:13:14.413754 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-03 01:13:14.413759 | orchestrator |
2026-03-03 01:13:14.413764 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-03 01:13:14.413769 | orchestrator | Tuesday 03 March 2026 01:11:42 +0000 (0:00:01.824) 0:01:35.063 *********
2026-03-03 01:13:14.413775 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-03 01:13:14.413780 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-03 01:13:14.413785 | orchestrator |
2026-03-03 01:13:14.413790 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-03 01:13:14.413795 | orchestrator | Tuesday 03 March 2026 01:11:44 +0000 (0:00:02.098) 0:01:37.162 *********
2026-03-03 01:13:14.413803 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:13:14.413808 | orchestrator |
2026-03-03 01:13:14.413813 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-03 01:13:14.413821 | orchestrator | Tuesday 03 March 2026 01:12:00 +0000 (0:00:15.722) 0:01:52.885 *********
2026-03-03 01:13:14.413826 | orchestrator |
2026-03-03 01:13:14.413831 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-03 01:13:14.413836 | orchestrator | Tuesday 03 March 2026 01:12:00 +0000 (0:00:00.125) 0:01:53.011 *********
2026-03-03 01:13:14.413841 | orchestrator |
2026-03-03 01:13:14.413847 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-03 01:13:14.413852 | orchestrator | Tuesday 03 March 2026 01:12:00 +0000 (0:00:00.166) 0:01:53.177 *********
2026-03-03 01:13:14.413857 | orchestrator |
2026-03-03 01:13:14.413862 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-03 01:13:14.413868 | orchestrator | Tuesday 03 March 2026 01:12:00 +0000 (0:00:00.167) 0:01:53.345 *********
2026-03-03 01:13:14.413873 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:13:14.413878 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:13:14.413883 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:13:14.413889 | orchestrator |
2026-03-03 01:13:14.413894 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-03 01:13:14.413899 | orchestrator | Tuesday 03 March 2026 01:12:13 +0000 (0:00:13.286) 0:02:06.632 *********
2026-03-03 01:13:14.413904 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:13:14.413909 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:13:14.413914 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:13:14.413919 | orchestrator |
2026-03-03 01:13:14.413924 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-03 01:13:14.413929 | orchestrator | Tuesday 03 March 2026 01:12:25 +0000 (0:00:11.034) 0:02:17.666 *********
2026-03-03 01:13:14.413935 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:13:14.413940 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:13:14.413945 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:13:14.413950 | orchestrator |
2026-03-03 01:13:14.413956 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-03 01:13:14.413961 | orchestrator | Tuesday 03 March 2026 01:12:35 +0000 (0:00:10.608) 0:02:28.274 *********
2026-03-03 01:13:14.413966 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:13:14.413971 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:13:14.413976 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:13:14.413981 | orchestrator |
2026-03-03 01:13:14.413986 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-03-03 01:13:14.413992 | orchestrator | Tuesday 03 March 2026 01:12:46 +0000 (0:00:10.658) 0:02:38.933 *********
2026-03-03 01:13:14.413997 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:13:14.414002 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:13:14.414007 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:13:14.414060 | orchestrator |
2026-03-03 01:13:14.414067 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-03-03 01:13:14.414073 | orchestrator | Tuesday 03 March 2026 01:12:53 +0000 (0:00:06.874) 0:02:45.808 *********
2026-03-03 01:13:14.414079 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:13:14.414085 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:13:14.414090 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:13:14.414096 | orchestrator |
2026-03-03 01:13:14.414102 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-03-03 01:13:14.414108 | orchestrator | Tuesday 03 March 2026 01:13:03 +0000 (0:00:10.195) 0:02:56.003 *********
2026-03-03 01:13:14.414114 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:13:14.414120 | orchestrator |
2026-03-03 01:13:14.414126 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:13:14.414132 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-03 01:13:14.414139 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-03 01:13:14.414151 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-03 01:13:14.414157 | orchestrator |
2026-03-03 01:13:14.414162 | orchestrator |
2026-03-03 01:13:14.414168 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:13:14.414173 | orchestrator | Tuesday 03 March 2026 01:13:11 +0000 (0:00:08.328) 0:03:04.331 *********
2026-03-03 01:13:14.414179 | orchestrator | ===============================================================================
2026-03-03 01:13:14.414184 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.74s
2026-03-03 01:13:14.414190 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.72s
2026-03-03 01:13:14.414196 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.28s
2026-03-03 01:13:14.414202 | orchestrator | designate : Restart designate-api container ---------------------------- 11.03s
2026-03-03 01:13:14.414207 | orchestrator | designate : Restart designate-producer container ----------------------- 10.66s
2026-03-03 01:13:14.414213 | orchestrator | designate : Restart designate-central container ------------------------ 10.61s
2026-03-03 01:13:14.414219 | orchestrator | designate : Restart designate-worker container ------------------------- 10.19s
2026-03-03 01:13:14.414224 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.33s
2026-03-03 01:13:14.414230 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.27s
2026-03-03 01:13:14.414236 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.06s
2026-03-03 01:13:14.414242 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.87s
2026-03-03 01:13:14.414253 | orchestrator | designate : Copying over config.json files for services ----------------- 6.32s
2026-03-03 01:13:14.414258 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.18s
2026-03-03 01:13:14.414262 | orchestrator | designate : Check designate containers ---------------------------------- 4.54s
2026-03-03 01:13:14.414265 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.78s
2026-03-03 01:13:14.414269 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.77s
2026-03-03 01:13:14.414272 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.73s
2026-03-03 01:13:14.414276 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.62s
2026-03-03 01:13:14.414280 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.55s
2026-03-03 01:13:14.414284 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.54s
2026-03-03 01:13:14.414287 | orchestrator | 2026-03-03 01:13:14 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED
2026-03-03 01:13:14.414291 | orchestrator | 2026-03-03 01:13:14 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:13:14.414295 | orchestrator | 2026-03-03 01:13:14 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED
2026-03-03 01:13:14.414299 | orchestrator | 2026-03-03 01:13:14 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:13:14.414304 | orchestrator | 2026-03-03 01:13:14 | INFO  | Wait 1 second(s) until the next check
b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:13:56.971688 | orchestrator | 2026-03-03 01:13:56 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED 2026-03-03 01:13:56.971702 | orchestrator | 2026-03-03 01:13:56 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:14:00.021936 | orchestrator | 2026-03-03 01:14:00 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state STARTED 2026-03-03 01:14:00.024235 | orchestrator | 2026-03-03 01:14:00 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:14:00.025672 | orchestrator | 2026-03-03 01:14:00 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:14:00.027058 | orchestrator | 2026-03-03 01:14:00 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED 2026-03-03 01:14:00.027106 | orchestrator | 2026-03-03 01:14:00 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:14:03.081375 | orchestrator | 2026-03-03 01:14:03.081528 | orchestrator | 2026-03-03 01:14:03.081539 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 01:14:03.081545 | orchestrator | 2026-03-03 01:14:03.081551 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:14:03.081556 | orchestrator | Tuesday 03 March 2026 01:12:51 +0000 (0:00:00.190) 0:00:00.190 ********* 2026-03-03 01:14:03.081560 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:14:03.081564 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:14:03.081567 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:14:03.081572 | orchestrator | 2026-03-03 01:14:03.081578 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:14:03.081584 | orchestrator | Tuesday 03 March 2026 01:12:51 +0000 (0:00:00.218) 0:00:00.408 ********* 2026-03-03 01:14:03.081590 | orchestrator | ok: 
[testbed-node-0] => (item=enable_placement_True) 2026-03-03 01:14:03.081596 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-03 01:14:03.081602 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-03 01:14:03.081608 | orchestrator | 2026-03-03 01:14:03.081614 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-03 01:14:03.081620 | orchestrator | 2026-03-03 01:14:03.081626 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-03 01:14:03.081632 | orchestrator | Tuesday 03 March 2026 01:12:51 +0000 (0:00:00.288) 0:00:00.697 ********* 2026-03-03 01:14:03.081638 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:14:03.081652 | orchestrator | 2026-03-03 01:14:03.081677 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-03 01:14:03.081681 | orchestrator | Tuesday 03 March 2026 01:12:52 +0000 (0:00:00.436) 0:00:01.133 ********* 2026-03-03 01:14:03.081684 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-03 01:14:03.081701 | orchestrator | 2026-03-03 01:14:03.081708 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-03 01:14:03.081713 | orchestrator | Tuesday 03 March 2026 01:12:55 +0000 (0:00:03.364) 0:00:04.498 ********* 2026-03-03 01:14:03.081718 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-03 01:14:03.081722 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-03 01:14:03.081725 | orchestrator | 2026-03-03 01:14:03.081728 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-03 01:14:03.081732 | 
orchestrator | Tuesday 03 March 2026 01:13:01 +0000 (0:00:05.757) 0:00:10.255 ********* 2026-03-03 01:14:03.081735 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-03 01:14:03.081738 | orchestrator | 2026-03-03 01:14:03.081742 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-03 01:14:03.081745 | orchestrator | Tuesday 03 March 2026 01:13:04 +0000 (0:00:03.085) 0:00:13.340 ********* 2026-03-03 01:14:03.081748 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-03 01:14:03.081751 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-03 01:14:03.081754 | orchestrator | 2026-03-03 01:14:03.081758 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-03 01:14:03.081761 | orchestrator | Tuesday 03 March 2026 01:13:08 +0000 (0:00:04.073) 0:00:17.413 ********* 2026-03-03 01:14:03.081764 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-03 01:14:03.081767 | orchestrator | 2026-03-03 01:14:03.081770 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-03 01:14:03.081774 | orchestrator | Tuesday 03 March 2026 01:13:12 +0000 (0:00:04.102) 0:00:21.516 ********* 2026-03-03 01:14:03.081777 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-03 01:14:03.081780 | orchestrator | 2026-03-03 01:14:03.081783 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-03 01:14:03.081786 | orchestrator | Tuesday 03 March 2026 01:13:16 +0000 (0:00:03.939) 0:00:25.456 ********* 2026-03-03 01:14:03.081790 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:03.081793 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:03.081796 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:03.081801 | orchestrator | 2026-03-03 01:14:03.081806 | 
orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-03 01:14:03.081810 | orchestrator | Tuesday 03 March 2026 01:13:16 +0000 (0:00:00.427) 0:00:25.883 ********* 2026-03-03 01:14:03.081821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.081842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.081857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.081864 | orchestrator | 2026-03-03 01:14:03.081870 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-03 01:14:03.081875 | orchestrator | Tuesday 03 March 2026 01:13:18 +0000 (0:00:01.636) 0:00:27.519 ********* 2026-03-03 01:14:03.081881 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:03.081886 | orchestrator | 2026-03-03 01:14:03.081891 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-03 01:14:03.081897 | orchestrator | Tuesday 03 March 2026 01:13:18 +0000 (0:00:00.144) 0:00:27.664 ********* 2026-03-03 01:14:03.081903 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:03.081908 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:03.081913 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:03.081919 | orchestrator | 2026-03-03 01:14:03.081925 | orchestrator | TASK [placement : include_tasks] 
*********************************************** 2026-03-03 01:14:03.081930 | orchestrator | Tuesday 03 March 2026 01:13:19 +0000 (0:00:00.410) 0:00:28.074 ********* 2026-03-03 01:14:03.081935 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:14:03.081941 | orchestrator | 2026-03-03 01:14:03.081947 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-03 01:14:03.081952 | orchestrator | Tuesday 03 March 2026 01:13:19 +0000 (0:00:00.559) 0:00:28.634 ********* 2026-03-03 01:14:03.081959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.081971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.081985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.081991 | orchestrator | 2026-03-03 01:14:03.081997 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-03 01:14:03.082002 | orchestrator | Tuesday 03 March 2026 01:13:20 +0000 (0:00:01.243) 0:00:29.878 ********* 2026-03-03 01:14:03.082009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:14:03.082039 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:03.082044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:14:03.082048 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:03.082056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:14:03.082066 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:03.082074 | orchestrator | 2026-03-03 01:14:03.082080 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-03 01:14:03.082086 | orchestrator | Tuesday 03 March 2026 01:13:21 +0000 (0:00:00.653) 0:00:30.532 ********* 2026-03-03 01:14:03.082094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:14:03.082100 
| orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:03.082106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:14:03.082111 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:03.082117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 
01:14:03.082123 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:03.082129 | orchestrator | 2026-03-03 01:14:03.082135 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-03 01:14:03.082145 | orchestrator | Tuesday 03 March 2026 01:13:22 +0000 (0:00:00.947) 0:00:31.479 ********* 2026-03-03 01:14:03.082162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.082169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.082180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.082190 | orchestrator | 2026-03-03 01:14:03.082200 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-03 01:14:03.082207 | orchestrator | Tuesday 03 March 2026 01:13:24 +0000 (0:00:01.706) 0:00:33.185 ********* 2026-03-03 01:14:03.082214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.082224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.082244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.082251 | orchestrator | 2026-03-03 01:14:03.082256 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-03 01:14:03.082261 | orchestrator | Tuesday 03 March 2026 01:13:28 +0000 (0:00:04.177) 0:00:37.363 ********* 2026-03-03 01:14:03.082267 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-03 01:14:03.082274 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-03 01:14:03.082286 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-03 01:14:03.082291 | orchestrator | 2026-03-03 01:14:03.082297 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-03 01:14:03.082302 | orchestrator | Tuesday 03 March 2026 01:13:29 +0000 (0:00:01.512) 0:00:38.875 ********* 2026-03-03 01:14:03.082308 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:14:03.082314 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:14:03.082324 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:14:03.082330 | orchestrator | 2026-03-03 01:14:03.082335 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-03 01:14:03.082340 | orchestrator | Tuesday 03 March 2026 01:13:31 +0000 (0:00:01.459) 0:00:40.334 ********* 2026-03-03 01:14:03.082346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:14:03.082352 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:03.082363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:14:03.082378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-03 01:14:03.082385 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:03.082393 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:03.082403 | orchestrator | 2026-03-03 01:14:03.082413 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-03 01:14:03.082431 | orchestrator | Tuesday 03 March 2026 01:13:31 +0000 (0:00:00.479) 0:00:40.813 ********* 2026-03-03 01:14:03.082441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.082452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.082469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-03 01:14:03.082475 | orchestrator | 2026-03-03 01:14:03.082481 | orchestrator | TASK 
[placement : Creating placement databases] ******************************** 2026-03-03 01:14:03.082487 | orchestrator | Tuesday 03 March 2026 01:13:33 +0000 (0:00:01.216) 0:00:42.030 ********* 2026-03-03 01:14:03.082492 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:14:03.082497 | orchestrator | 2026-03-03 01:14:03.082502 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-03 01:14:03.082508 | orchestrator | Tuesday 03 March 2026 01:13:36 +0000 (0:00:02.975) 0:00:45.006 ********* 2026-03-03 01:14:03.082513 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:14:03.082519 | orchestrator | 2026-03-03 01:14:03.082524 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-03 01:14:03.082531 | orchestrator | Tuesday 03 March 2026 01:13:38 +0000 (0:00:01.953) 0:00:46.960 ********* 2026-03-03 01:14:03.082537 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:14:03.082543 | orchestrator | 2026-03-03 01:14:03.082553 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-03 01:14:03.082562 | orchestrator | Tuesday 03 March 2026 01:13:50 +0000 (0:00:12.568) 0:00:59.528 ********* 2026-03-03 01:14:03.082568 | orchestrator | 2026-03-03 01:14:03.082573 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-03 01:14:03.082578 | orchestrator | Tuesday 03 March 2026 01:13:50 +0000 (0:00:00.064) 0:00:59.592 ********* 2026-03-03 01:14:03.082585 | orchestrator | 2026-03-03 01:14:03.082600 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-03 01:14:03.082606 | orchestrator | Tuesday 03 March 2026 01:13:50 +0000 (0:00:00.063) 0:00:59.655 ********* 2026-03-03 01:14:03.082612 | orchestrator | 2026-03-03 01:14:03.082618 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] 
****************** 2026-03-03 01:14:03.082624 | orchestrator | Tuesday 03 March 2026 01:13:50 +0000 (0:00:00.065) 0:00:59.721 ********* 2026-03-03 01:14:03.082630 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:14:03.082636 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:14:03.082641 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:14:03.082646 | orchestrator | 2026-03-03 01:14:03.082654 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:14:03.082660 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-03 01:14:03.082668 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-03 01:14:03.082674 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-03 01:14:03.082679 | orchestrator | 2026-03-03 01:14:03.082685 | orchestrator | 2026-03-03 01:14:03.082690 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:14:03.082700 | orchestrator | Tuesday 03 March 2026 01:14:00 +0000 (0:00:09.728) 0:01:09.450 ********* 2026-03-03 01:14:03.082705 | orchestrator | =============================================================================== 2026-03-03 01:14:03.082715 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.57s 2026-03-03 01:14:03.082721 | orchestrator | placement : Restart placement-api container ----------------------------- 9.73s 2026-03-03 01:14:03.082726 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 5.76s 2026-03-03 01:14:03.082732 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.18s 2026-03-03 01:14:03.082737 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.10s 2026-03-03 
01:14:03.082742 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.07s 2026-03-03 01:14:03.082747 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.94s 2026-03-03 01:14:03.082751 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.36s 2026-03-03 01:14:03.082755 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.09s 2026-03-03 01:14:03.082759 | orchestrator | placement : Creating placement databases -------------------------------- 2.98s 2026-03-03 01:14:03.082762 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.95s 2026-03-03 01:14:03.082766 | orchestrator | placement : Copying over config.json files for services ----------------- 1.71s 2026-03-03 01:14:03.082770 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.64s 2026-03-03 01:14:03.082774 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.51s 2026-03-03 01:14:03.082778 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.46s 2026-03-03 01:14:03.082781 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.24s 2026-03-03 01:14:03.082787 | orchestrator | placement : Check placement containers ---------------------------------- 1.22s 2026-03-03 01:14:03.082792 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.95s 2026-03-03 01:14:03.082798 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.65s 2026-03-03 01:14:03.082804 | orchestrator | placement : include_tasks ----------------------------------------------- 0.56s 2026-03-03 01:14:03.082810 | orchestrator | 2026-03-03 01:14:03 | INFO  | Task df93d93f-e2ae-40f3-9e29-6f4d3c097c60 is in state SUCCESS 
2026-03-03 01:14:03.082816 | orchestrator | 2026-03-03 01:14:03 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:14:03.086693 | orchestrator | 2026-03-03 01:14:03 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:14:03.089639 | orchestrator | 2026-03-03 01:14:03 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:14:03.090963 | orchestrator | 2026-03-03 01:14:03 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED 2026-03-03 01:14:03.091114 | orchestrator | 2026-03-03 01:14:03 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:14:06.129388 | orchestrator | 2026-03-03 01:14:06 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:14:06.132056 | orchestrator | 2026-03-03 01:14:06 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:14:06.134308 | orchestrator | 2026-03-03 01:14:06 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:14:06.136488 | orchestrator | 2026-03-03 01:14:06 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED 2026-03-03 01:14:06.136550 | orchestrator | 2026-03-03 01:14:06 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:14:09.170583 | orchestrator | 2026-03-03 01:14:09 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:14:09.170635 | orchestrator | 2026-03-03 01:14:09 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:14:09.170655 | orchestrator | 2026-03-03 01:14:09 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:14:09.171264 | orchestrator | 2026-03-03 01:14:09 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED 2026-03-03 01:14:09.171293 | orchestrator | 2026-03-03 01:14:09 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:14:12.197887 | 
orchestrator | 2026-03-03 01:14:12 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:14:12.198189 | orchestrator | 2026-03-03 01:14:12 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:14:12.199153 | orchestrator | 2026-03-03 01:14:12 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state STARTED 2026-03-03 01:14:12.199740 | orchestrator | 2026-03-03 01:14:12 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED 2026-03-03 01:14:12.199789 | orchestrator | 2026-03-03 01:14:12 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:14:15.234573 | orchestrator | 2026-03-03 01:14:15 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:14:15.238073 | orchestrator | 2026-03-03 01:14:15 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:14:15.239752 | orchestrator | 2026-03-03 01:14:15.239802 | orchestrator | 2026-03-03 01:14:15 | INFO  | Task b72ca968-bd9d-464e-b6ab-423b364d419a is in state SUCCESS 2026-03-03 01:14:15.240938 | orchestrator | 2026-03-03 01:14:15.240983 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-03 01:14:15.240990 | orchestrator | 2026-03-03 01:14:15.240996 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:14:15.241002 | orchestrator | Tuesday 03 March 2026 01:09:59 +0000 (0:00:00.239) 0:00:00.240 ********* 2026-03-03 01:14:15.241007 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:14:15.241014 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:14:15.241019 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:14:15.241024 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:14:15.241029 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:14:15.241035 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:14:15.241040 | orchestrator | 2026-03-03 01:14:15.241046 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:14:15.241051 | orchestrator | Tuesday 03 March 2026 01:09:59 +0000 (0:00:00.582) 0:00:00.822 ********* 2026-03-03 01:14:15.241057 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-03 01:14:15.241063 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-03 01:14:15.241068 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-03 01:14:15.241073 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-03 01:14:15.241078 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-03 01:14:15.241084 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-03 01:14:15.241089 | orchestrator | 2026-03-03 01:14:15.241094 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-03 01:14:15.241099 | orchestrator | 2026-03-03 01:14:15.241104 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-03 01:14:15.241110 | orchestrator | Tuesday 03 March 2026 01:10:00 +0000 (0:00:00.516) 0:00:01.338 ********* 2026-03-03 01:14:15.241116 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:14:15.241123 | orchestrator | 2026-03-03 01:14:15.241128 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-03 01:14:15.241133 | orchestrator | Tuesday 03 March 2026 01:10:01 +0000 (0:00:01.074) 0:00:02.413 ********* 2026-03-03 01:14:15.241139 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:14:15.241158 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:14:15.241164 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:14:15.241169 | orchestrator | ok: [testbed-node-3] 2026-03-03 
01:14:15.241196 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:14:15.241203 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:14:15.241209 | orchestrator | 2026-03-03 01:14:15.241214 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-03 01:14:15.241219 | orchestrator | Tuesday 03 March 2026 01:10:02 +0000 (0:00:01.189) 0:00:03.603 ********* 2026-03-03 01:14:15.241225 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:14:15.241230 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:14:15.241235 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:14:15.241241 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:14:15.241246 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:14:15.241251 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:14:15.241256 | orchestrator | 2026-03-03 01:14:15.241312 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-03 01:14:15.241318 | orchestrator | Tuesday 03 March 2026 01:10:03 +0000 (0:00:01.012) 0:00:04.616 ********* 2026-03-03 01:14:15.241324 | orchestrator | ok: [testbed-node-0] => { 2026-03-03 01:14:15.241330 | orchestrator |  "changed": false, 2026-03-03 01:14:15.241335 | orchestrator |  "msg": "All assertions passed" 2026-03-03 01:14:15.241638 | orchestrator | } 2026-03-03 01:14:15.241650 | orchestrator | ok: [testbed-node-1] => { 2026-03-03 01:14:15.241654 | orchestrator |  "changed": false, 2026-03-03 01:14:15.241657 | orchestrator |  "msg": "All assertions passed" 2026-03-03 01:14:15.241661 | orchestrator | } 2026-03-03 01:14:15.241665 | orchestrator | ok: [testbed-node-2] => { 2026-03-03 01:14:15.241669 | orchestrator |  "changed": false, 2026-03-03 01:14:15.241672 | orchestrator |  "msg": "All assertions passed" 2026-03-03 01:14:15.241676 | orchestrator | } 2026-03-03 01:14:15.241679 | orchestrator | ok: [testbed-node-3] => { 2026-03-03 01:14:15.241683 | orchestrator |  "changed": false, 2026-03-03 
01:14:15.241687 | orchestrator |  "msg": "All assertions passed" 2026-03-03 01:14:15.241690 | orchestrator | } 2026-03-03 01:14:15.241694 | orchestrator | ok: [testbed-node-4] => { 2026-03-03 01:14:15.241697 | orchestrator |  "changed": false, 2026-03-03 01:14:15.241701 | orchestrator |  "msg": "All assertions passed" 2026-03-03 01:14:15.241704 | orchestrator | } 2026-03-03 01:14:15.241708 | orchestrator | ok: [testbed-node-5] => { 2026-03-03 01:14:15.241712 | orchestrator |  "changed": false, 2026-03-03 01:14:15.241715 | orchestrator |  "msg": "All assertions passed" 2026-03-03 01:14:15.241719 | orchestrator | } 2026-03-03 01:14:15.241722 | orchestrator | 2026-03-03 01:14:15.241726 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-03 01:14:15.241730 | orchestrator | Tuesday 03 March 2026 01:10:04 +0000 (0:00:00.705) 0:00:05.322 ********* 2026-03-03 01:14:15.241734 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.241737 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.241741 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.241744 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.241747 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.241751 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.241755 | orchestrator | 2026-03-03 01:14:15.241758 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-03 01:14:15.241762 | orchestrator | Tuesday 03 March 2026 01:10:04 +0000 (0:00:00.523) 0:00:05.845 ********* 2026-03-03 01:14:15.241765 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-03 01:14:15.241769 | orchestrator | 2026-03-03 01:14:15.241779 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-03 01:14:15.241783 | orchestrator | Tuesday 03 March 2026 01:10:07 +0000 (0:00:02.981) 0:00:08.827 
********* 2026-03-03 01:14:15.241787 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-03 01:14:15.241791 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-03 01:14:15.241804 | orchestrator | 2026-03-03 01:14:15.241835 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-03 01:14:15.241842 | orchestrator | Tuesday 03 March 2026 01:10:14 +0000 (0:00:06.194) 0:00:15.022 ********* 2026-03-03 01:14:15.241847 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-03 01:14:15.241852 | orchestrator | 2026-03-03 01:14:15.241857 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-03 01:14:15.241861 | orchestrator | Tuesday 03 March 2026 01:10:17 +0000 (0:00:03.174) 0:00:18.196 ********* 2026-03-03 01:14:15.241866 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-03 01:14:15.241871 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-03 01:14:15.241875 | orchestrator | 2026-03-03 01:14:15.241880 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-03 01:14:15.241885 | orchestrator | Tuesday 03 March 2026 01:10:20 +0000 (0:00:03.562) 0:00:21.758 ********* 2026-03-03 01:14:15.241890 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-03 01:14:15.241895 | orchestrator | 2026-03-03 01:14:15.241900 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-03 01:14:15.241905 | orchestrator | Tuesday 03 March 2026 01:10:24 +0000 (0:00:03.166) 0:00:24.924 ********* 2026-03-03 01:14:15.241910 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-03 01:14:15.241915 | orchestrator | changed: [testbed-node-0] => (item=neutron -> 
service -> service) 2026-03-03 01:14:15.241920 | orchestrator | 2026-03-03 01:14:15.241926 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-03 01:14:15.241931 | orchestrator | Tuesday 03 March 2026 01:10:31 +0000 (0:00:07.299) 0:00:32.224 ********* 2026-03-03 01:14:15.241936 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.241942 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.241948 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.241953 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.241957 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.241962 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.241965 | orchestrator | 2026-03-03 01:14:15.241969 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-03 01:14:15.241973 | orchestrator | Tuesday 03 March 2026 01:10:32 +0000 (0:00:00.736) 0:00:32.961 ********* 2026-03-03 01:14:15.241976 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.241980 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.241983 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.241987 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.241990 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.241994 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.241997 | orchestrator | 2026-03-03 01:14:15.242001 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-03 01:14:15.242004 | orchestrator | Tuesday 03 March 2026 01:10:34 +0000 (0:00:02.532) 0:00:35.493 ********* 2026-03-03 01:14:15.242008 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:14:15.242033 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:14:15.242037 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:14:15.242041 | orchestrator | ok: [testbed-node-3] 
2026-03-03 01:14:15.242045 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:14:15.242049 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:14:15.242052 | orchestrator | 2026-03-03 01:14:15.242056 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-03 01:14:15.242059 | orchestrator | Tuesday 03 March 2026 01:10:35 +0000 (0:00:01.063) 0:00:36.556 ********* 2026-03-03 01:14:15.242063 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242067 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242070 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242074 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242082 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242086 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242089 | orchestrator | 2026-03-03 01:14:15.242093 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-03 01:14:15.242096 | orchestrator | Tuesday 03 March 2026 01:10:37 +0000 (0:00:02.137) 0:00:38.694 ********* 2026-03-03 01:14:15.242102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242129 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242154 | orchestrator | 2026-03-03 01:14:15.242157 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-03 01:14:15.242161 | orchestrator | Tuesday 03 March 2026 01:10:41 +0000 (0:00:03.773) 0:00:42.467 ********* 2026-03-03 01:14:15.242165 | orchestrator | [WARNING]: Skipped 2026-03-03 01:14:15.242170 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-03 01:14:15.242174 | orchestrator | due to this access issue: 2026-03-03 01:14:15.242178 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-03 01:14:15.242182 | orchestrator | a directory 2026-03-03 01:14:15.242185 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 01:14:15.242189 | orchestrator | 2026-03-03 01:14:15.242192 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-03 01:14:15.242205 | orchestrator | Tuesday 03 March 2026 01:10:42 +0000 (0:00:00.865) 0:00:43.333 ********* 2026-03-03 01:14:15.242209 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:14:15.242214 | orchestrator | 2026-03-03 01:14:15.242218 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-03 01:14:15.242221 | orchestrator | Tuesday 03 March 2026 01:10:43 +0000 (0:00:01.193) 0:00:44.526 ********* 2026-03-03 01:14:15.242225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242263 | orchestrator | 2026-03-03 01:14:15.242267 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-03 01:14:15.242271 | orchestrator | Tuesday 03 March 2026 01:10:47 +0000 (0:00:03.499) 0:00:48.026 ********* 2026-03-03 01:14:15.242275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242281 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242289 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-03-03 01:14:15.242299 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242317 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242333 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242340 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242346 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242351 | orchestrator | 2026-03-03 01:14:15.242356 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-03 01:14:15.242361 | orchestrator | Tuesday 03 March 2026 01:10:50 +0000 (0:00:03.795) 0:00:51.821 ********* 2026-03-03 01:14:15.242365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-03-03 01:14:15.242370 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242392 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-03-03 01:14:15.242407 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242418 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242441 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242453 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242457 | orchestrator | 2026-03-03 01:14:15.242461 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-03 01:14:15.242467 | orchestrator | Tuesday 03 March 2026 01:10:53 +0000 (0:00:02.336) 0:00:54.157 ********* 2026-03-03 01:14:15.242471 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242474 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242478 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242481 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242484 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242488 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242492 | orchestrator | 2026-03-03 01:14:15.242496 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-03 01:14:15.242503 | orchestrator | Tuesday 03 March 2026 01:10:56 +0000 (0:00:02.749) 0:00:56.907 ********* 2026-03-03 01:14:15.242507 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242511 | orchestrator | 2026-03-03 01:14:15.242514 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-03 01:14:15.242521 | orchestrator | Tuesday 03 March 2026 01:10:56 +0000 (0:00:00.103) 
0:00:57.010 ********* 2026-03-03 01:14:15.242524 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242528 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242531 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242535 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242539 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242542 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242546 | orchestrator | 2026-03-03 01:14:15.242549 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-03 01:14:15.242553 | orchestrator | Tuesday 03 March 2026 01:10:56 +0000 (0:00:00.807) 0:00:57.818 ********* 2026-03-03 01:14:15.242556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242560 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242568 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242575 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242590 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242597 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242605 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242608 | orchestrator | 2026-03-03 01:14:15.242612 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-03 01:14:15.242615 | orchestrator | Tuesday 03 March 2026 01:11:00 +0000 (0:00:03.144) 0:01:00.963 ********* 2026-03-03 01:14:15.242619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242639 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242650 | orchestrator | 2026-03-03 01:14:15.242654 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-03 01:14:15.242657 | orchestrator | Tuesday 03 March 2026 01:11:04 +0000 (0:00:04.647) 0:01:05.610 
********* 2026-03-03 01:14:15.242661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-03 01:14:15.242694 | orchestrator | 2026-03-03 01:14:15.242700 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-03 01:14:15.242703 | orchestrator | Tuesday 03 March 2026 01:11:11 +0000 (0:00:07.078) 0:01:12.689 ********* 2026-03-03 01:14:15.242710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242714 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242722 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242729 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.242738 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242748 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242758 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242761 | orchestrator | 2026-03-03 01:14:15.242765 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-03 01:14:15.242768 | orchestrator | Tuesday 03 March 2026 01:11:15 +0000 (0:00:04.191) 0:01:16.880 ********* 2026-03-03 01:14:15.242772 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242775 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242779 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:14:15.242782 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242786 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:14:15.242790 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:14:15.242793 | orchestrator | 2026-03-03 01:14:15.242797 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-03 01:14:15.242801 | orchestrator | Tuesday 03 March 2026 01:11:19 +0000 (0:00:03.391) 0:01:20.272 ********* 2026-03-03 
01:14:15.242804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242808 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242818 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.242825 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-03 01:14:15.242851 | orchestrator | 2026-03-03 01:14:15.242854 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-03 01:14:15.242858 | orchestrator | Tuesday 03 March 2026 01:11:24 +0000 (0:00:05.081) 0:01:25.354 ********* 2026-03-03 01:14:15.242862 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242865 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242869 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242872 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242876 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242880 | 
orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242883 | orchestrator | 2026-03-03 01:14:15.242887 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-03 01:14:15.242890 | orchestrator | Tuesday 03 March 2026 01:11:27 +0000 (0:00:02.619) 0:01:27.973 ********* 2026-03-03 01:14:15.242894 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242897 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242901 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242904 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242908 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242912 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242915 | orchestrator | 2026-03-03 01:14:15.242919 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-03 01:14:15.242922 | orchestrator | Tuesday 03 March 2026 01:11:29 +0000 (0:00:02.921) 0:01:30.894 ********* 2026-03-03 01:14:15.242926 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242929 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242933 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242937 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242940 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242943 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242947 | orchestrator | 2026-03-03 01:14:15.242950 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-03 01:14:15.242954 | orchestrator | Tuesday 03 March 2026 01:11:33 +0000 (0:00:03.282) 0:01:34.176 ********* 2026-03-03 01:14:15.242958 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242961 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.242965 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242970 | 
orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.242974 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.242977 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.242981 | orchestrator | 2026-03-03 01:14:15.242984 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-03 01:14:15.242988 | orchestrator | Tuesday 03 March 2026 01:11:35 +0000 (0:00:02.315) 0:01:36.491 ********* 2026-03-03 01:14:15.242992 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.242996 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.242999 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.243003 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.243009 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.243012 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.243016 | orchestrator | 2026-03-03 01:14:15.243020 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-03 01:14:15.243023 | orchestrator | Tuesday 03 March 2026 01:11:37 +0000 (0:00:01.878) 0:01:38.370 ********* 2026-03-03 01:14:15.243028 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.243033 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.243039 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.243043 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.243046 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.243050 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.243053 | orchestrator | 2026-03-03 01:14:15.243059 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-03 01:14:15.243063 | orchestrator | Tuesday 03 March 2026 01:11:39 +0000 (0:00:01.771) 0:01:40.142 ********* 2026-03-03 01:14:15.243066 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-03 01:14:15.243070 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.243073 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-03 01:14:15.243156 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.243161 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-03 01:14:15.243164 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.243168 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-03 01:14:15.243171 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.243175 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-03 01:14:15.243179 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.243182 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-03 01:14:15.243186 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.243189 | orchestrator | 2026-03-03 01:14:15.243193 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-03 01:14:15.243196 | orchestrator | Tuesday 03 March 2026 01:11:41 +0000 (0:00:01.861) 0:01:42.004 ********* 2026-03-03 01:14:15.243200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.243204 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.243208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.243211 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.243221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.243228 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.243232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.243235 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.243239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.243243 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.243247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.243250 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.243254 | orchestrator | 2026-03-03 01:14:15.243257 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-03 01:14:15.243261 | orchestrator | Tuesday 03 March 2026 01:11:42 +0000 (0:00:01.866) 0:01:43.870 ********* 2026-03-03 01:14:15.243267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.243273 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.243280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.243284 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.243289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-03 01:14:15.243294 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.243302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.243310 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.243315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.243321 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.243329 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-03 01:14:15.243338 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.243344 | orchestrator | 2026-03-03 01:14:15.243350 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-03 01:14:15.243356 | orchestrator | Tuesday 03 March 2026 01:11:45 +0000 (0:00:02.116) 0:01:45.987 ********* 2026-03-03 01:14:15.243362 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.243371 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:14:15.243376 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:15.243382 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.243388 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:14:15.243393 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:14:15.243400 | orchestrator | 2026-03-03 01:14:15.243406 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-03 01:14:15.243411 | orchestrator | Tuesday 03 March 2026 01:11:47 +0000 (0:00:02.508) 0:01:48.496 ********* 2026-03-03 01:14:15.243416 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:15.243437 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:15.243443 | orchestrator | 
skipping: [testbed-node-1]
2026-03-03 01:14:15.243448 | orchestrator | changed: [testbed-node-5]
2026-03-03 01:14:15.243453 | orchestrator | changed: [testbed-node-3]
2026-03-03 01:14:15.243458 | orchestrator | changed: [testbed-node-4]
2026-03-03 01:14:15.243464 | orchestrator |
2026-03-03 01:14:15.243469 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-03-03 01:14:15.243474 | orchestrator | Tuesday 03 March 2026 01:11:52 +0000 (0:00:04.588) 0:01:53.084 *********
2026-03-03 01:14:15.243479 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.243485 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.243489 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.243494 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.243500 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.243504 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.243507 | orchestrator |
2026-03-03 01:14:15.243510 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-03-03 01:14:15.243513 | orchestrator | Tuesday 03 March 2026 01:11:54 +0000 (0:00:01.865) 0:01:54.949 *********
2026-03-03 01:14:15.243517 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.243520 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.243523 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.243529 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.243534 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.243539 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.243544 | orchestrator |
2026-03-03 01:14:15.243549 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-03-03 01:14:15.243555 | orchestrator | Tuesday 03 March 2026 01:11:55 +0000 (0:00:01.733) 0:01:56.683 *********
2026-03-03 01:14:15.243560 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.243565 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.243570 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.243576 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.243581 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.243586 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.243590 | orchestrator |
2026-03-03 01:14:15.243593 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-03 01:14:15.243597 | orchestrator | Tuesday 03 March 2026 01:11:57 +0000 (0:00:01.814) 0:01:58.498 *********
2026-03-03 01:14:15.243604 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.243607 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.243610 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.243614 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.243619 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.243624 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.243629 | orchestrator |
2026-03-03 01:14:15.243635 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-03 01:14:15.243640 | orchestrator | Tuesday 03 March 2026 01:12:00 +0000 (0:00:02.441) 0:02:00.940 *********
2026-03-03 01:14:15.243645 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.243651 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.243656 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.243661 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.243666 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.243678 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.243684 | orchestrator |
2026-03-03 01:14:15.243692 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-03 01:14:15.243695 | orchestrator | Tuesday 03 March 2026 01:12:03 +0000 (0:00:03.414) 0:02:04.354 *********
2026-03-03 01:14:15.243698 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.243703 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.243709 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.243714 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.243719 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.243725 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.243730 | orchestrator |
2026-03-03 01:14:15.243735 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-03 01:14:15.243741 | orchestrator | Tuesday 03 March 2026 01:12:06 +0000 (0:00:02.910) 0:02:07.264 *********
2026-03-03 01:14:15.243746 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.243751 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.243756 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.243762 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.243767 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.243772 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.243778 | orchestrator |
2026-03-03 01:14:15.243783 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-03-03 01:14:15.243789 | orchestrator | Tuesday 03 March 2026 01:12:08 +0000 (0:00:02.163) 0:02:09.427 *********
2026-03-03 01:14:15.243794 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-03 01:14:15.243800 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.243809 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-03 01:14:15.243815 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.243821 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-03 01:14:15.243826 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.243831 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-03 01:14:15.243836 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.243846 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-03 01:14:15.243851 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.243856 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-03 01:14:15.243861 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.243866 | orchestrator |
2026-03-03 01:14:15.243872 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-03 01:14:15.243877 | orchestrator | Tuesday 03 March 2026 01:12:10 +0000 (0:00:02.106) 0:02:11.534 *********
2026-03-03 01:14:15.243883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-03 01:14:15.243893 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.243899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-03 01:14:15.243905 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.243910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-03 01:14:15.243916 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.243924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-03 01:14:15.243931 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.243939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-03 01:14:15.243948 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.243954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-03 01:14:15.243959 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.243964 | orchestrator |
2026-03-03 01:14:15.243969 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-03-03 01:14:15.243974 | orchestrator | Tuesday 03 March 2026 01:12:12 +0000 (0:00:02.013) 0:02:13.548 *********
2026-03-03 01:14:15.243979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-03 01:14:15.243985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-03 01:14:15.243999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-03 01:14:15.244010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-03 01:14:15.244017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-03 01:14:15.244022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-03 01:14:15.244028 | orchestrator |
2026-03-03 01:14:15.244033 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-03 01:14:15.244039 | orchestrator | Tuesday 03 March 2026 01:12:16 +0000 (0:00:04.139) 0:02:17.687 *********
2026-03-03 01:14:15.244044 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:15.244050 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:15.244056 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:15.244062 | orchestrator | skipping: [testbed-node-3]
2026-03-03 01:14:15.244067 | orchestrator | skipping: [testbed-node-4]
2026-03-03 01:14:15.244073 | orchestrator | skipping: [testbed-node-5]
2026-03-03 01:14:15.244079 | orchestrator |
2026-03-03 01:14:15.244084 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-03-03 01:14:15.244090 | orchestrator | Tuesday 03 March 2026 01:12:17 +0000 (0:00:00.715) 0:02:18.403 *********
2026-03-03 01:14:15.244096 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:14:15.244101 | orchestrator |
2026-03-03 01:14:15.244107 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-03 01:14:15.244112 | orchestrator | Tuesday 03 March 2026 01:12:19 +0000 (0:00:01.963) 0:02:20.367 *********
2026-03-03 01:14:15.244118 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:14:15.244124 | orchestrator |
2026-03-03 01:14:15.244130 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-03 01:14:15.244135 | orchestrator | Tuesday 03 March 2026 01:12:21 +0000 (0:00:02.004) 0:02:22.371 *********
2026-03-03 01:14:15.244151 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:14:15.244157 | orchestrator |
2026-03-03 01:14:15.244163 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-03 01:14:15.244168 | orchestrator | Tuesday 03 March 2026 01:12:59 +0000 (0:00:37.827) 0:03:00.199 *********
2026-03-03 01:14:15.244173 | orchestrator |
2026-03-03 01:14:15.244178 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-03 01:14:15.244186 | orchestrator | Tuesday 03 March 2026 01:12:59 +0000 (0:00:00.127) 0:03:00.327 *********
2026-03-03 01:14:15.244192 | orchestrator |
2026-03-03 01:14:15.244197 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-03 01:14:15.244202 | orchestrator | Tuesday 03 March 2026 01:12:59 +0000 (0:00:00.246) 0:03:00.574 *********
2026-03-03 01:14:15.244207 | orchestrator |
2026-03-03 01:14:15.244213 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-03 01:14:15.244218 | orchestrator | Tuesday 03 March 2026 01:12:59 +0000 (0:00:00.066) 0:03:00.640 *********
2026-03-03 01:14:15.244223 | orchestrator |
2026-03-03 01:14:15.244231 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-03 01:14:15.244237 | orchestrator | Tuesday 03 March 2026 01:12:59 +0000 (0:00:00.065) 0:03:00.706 *********
2026-03-03 01:14:15.244242 | orchestrator |
2026-03-03 01:14:15.244248 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-03 01:14:15.244253 | orchestrator | Tuesday 03 March 2026 01:12:59 +0000 (0:00:00.070) 0:03:00.776 *********
2026-03-03 01:14:15.244259 | orchestrator |
2026-03-03 01:14:15.244264 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-03 01:14:15.244270 | orchestrator | Tuesday 03 March 2026 01:12:59 +0000 (0:00:00.070) 0:03:00.846 *********
2026-03-03 01:14:15.244275 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:14:15.244280 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:14:15.244286 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:14:15.244292 | orchestrator |
2026-03-03 01:14:15.244297 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-03 01:14:15.244303 | orchestrator | Tuesday 03 March 2026 01:13:21 +0000 (0:00:21.665) 0:03:22.512 *********
2026-03-03 01:14:15.244308 | orchestrator | changed: [testbed-node-4]
2026-03-03 01:14:15.244313 | orchestrator | changed: [testbed-node-3]
2026-03-03 01:14:15.244318 | orchestrator | changed: [testbed-node-5]
2026-03-03 01:14:15.244323 | orchestrator |
2026-03-03 01:14:15.244328 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:14:15.244334 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-03 01:14:15.244340 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-03 01:14:15.244346 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-03 01:14:15.244351 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-03 01:14:15.244356 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-03 01:14:15.244361 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-03 01:14:15.244366 | orchestrator |
2026-03-03 01:14:15.244372 | orchestrator |
2026-03-03 01:14:15.244377 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:14:15.244382 | orchestrator | Tuesday 03 March 2026 01:14:11 +0000 (0:00:50.324) 0:04:12.836 *********
2026-03-03 01:14:15.244387 | orchestrator | ===============================================================================
2026-03-03 01:14:15.244396 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.32s
2026-03-03 01:14:15.244401 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 37.83s
2026-03-03 01:14:15.244406 | orchestrator | neutron : Restart neutron-server container ----------------------------- 21.67s
2026-03-03 01:14:15.244411 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.30s
2026-03-03 01:14:15.244416 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.08s
2026-03-03 01:14:15.244447 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.19s
2026-03-03 01:14:15.244453 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.08s
2026-03-03 01:14:15.244458 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.65s
2026-03-03 01:14:15.244463 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.59s
2026-03-03 01:14:15.244468 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.19s
2026-03-03 01:14:15.244473 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.14s
2026-03-03 01:14:15.244479 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.80s
2026-03-03 01:14:15.244484 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.77s
2026-03-03 01:14:15.244489 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.56s
2026-03-03 01:14:15.244494 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.50s
2026-03-03 01:14:15.244499 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.41s
2026-03-03 01:14:15.244503 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.39s
2026-03-03 01:14:15.244506 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.28s
2026-03-03 01:14:15.244509 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.17s
2026-03-03 01:14:15.244515 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.17s
2026-03-03 01:14:15.244518 | orchestrator | 2026-03-03 01:14:15 | INFO  | Task 8af7b478-46ae-41d0-9b96-b8d65616fdc0 is in state STARTED
2026-03-03 01:14:15.244521 | orchestrator | 2026-03-03 01:14:15 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:15.244525 | orchestrator | 2026-03-03 01:14:15 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:18.286669 | orchestrator | 2026-03-03 01:14:18 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:18.287745 | orchestrator | 2026-03-03 01:14:18 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:18.289596 | orchestrator | 2026-03-03 01:14:18 | INFO  | Task 8af7b478-46ae-41d0-9b96-b8d65616fdc0 is in state STARTED
2026-03-03 01:14:18.290379 | orchestrator | 2026-03-03 01:14:18 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:18.290496 | orchestrator | 2026-03-03 01:14:18 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:21.325962 | orchestrator | 2026-03-03 01:14:21 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:21.327825 | orchestrator | 2026-03-03 01:14:21 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:21.329054 | orchestrator | 2026-03-03 01:14:21 | INFO  | Task 8af7b478-46ae-41d0-9b96-b8d65616fdc0 is in state SUCCESS
2026-03-03 01:14:21.330976 | orchestrator | 2026-03-03 01:14:21 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:21.331012 | orchestrator | 2026-03-03 01:14:21 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:24.371442 | orchestrator | 2026-03-03 01:14:24 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:24.372398 | orchestrator | 2026-03-03 01:14:24 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:24.373506 | orchestrator | 2026-03-03 01:14:24 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:24.374316 | orchestrator | 2026-03-03 01:14:24 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:24.374544 | orchestrator | 2026-03-03 01:14:24 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:27.420651 | orchestrator | 2026-03-03 01:14:27 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:27.421838 | orchestrator | 2026-03-03 01:14:27 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:27.423201 | orchestrator | 2026-03-03 01:14:27 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:27.424608 | orchestrator | 2026-03-03 01:14:27 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:27.424647 | orchestrator | 2026-03-03 01:14:27 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:30.463128 | orchestrator | 2026-03-03 01:14:30 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:30.464196 | orchestrator | 2026-03-03 01:14:30 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:30.465491 | orchestrator | 2026-03-03 01:14:30 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:30.466592 | orchestrator | 2026-03-03 01:14:30 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:30.466769 | orchestrator | 2026-03-03 01:14:30 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:33.508794 | orchestrator | 2026-03-03 01:14:33 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:33.509308 | orchestrator | 2026-03-03 01:14:33 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:33.511390 | orchestrator | 2026-03-03 01:14:33 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:33.512252 | orchestrator | 2026-03-03 01:14:33 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:33.512284 | orchestrator | 2026-03-03 01:14:33 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:36.548685 | orchestrator | 2026-03-03 01:14:36 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:36.550302 | orchestrator | 2026-03-03 01:14:36 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:36.551949 | orchestrator | 2026-03-03 01:14:36 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:36.553639 | orchestrator | 2026-03-03 01:14:36 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:36.553700 | orchestrator | 2026-03-03 01:14:36 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:39.595228 | orchestrator | 2026-03-03 01:14:39 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:39.596917 | orchestrator | 2026-03-03 01:14:39 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:39.598950 | orchestrator | 2026-03-03 01:14:39 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:39.601392 | orchestrator | 2026-03-03 01:14:39 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:39.601993 | orchestrator | 2026-03-03 01:14:39 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:42.640798 | orchestrator | 2026-03-03 01:14:42 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:42.641411 | orchestrator | 2026-03-03 01:14:42 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:42.642557 | orchestrator | 2026-03-03 01:14:42 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:42.643724 | orchestrator | 2026-03-03 01:14:42 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:42.643759 | orchestrator | 2026-03-03 01:14:42 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:45.678403 | orchestrator | 2026-03-03 01:14:45 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:45.678522 | orchestrator | 2026-03-03 01:14:45 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:45.680329 | orchestrator | 2026-03-03 01:14:45 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:45.680771 | orchestrator | 2026-03-03 01:14:45 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:45.681014 | orchestrator | 2026-03-03 01:14:45 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:48.728785 | orchestrator | 2026-03-03 01:14:48 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:48.728839 | orchestrator | 2026-03-03 01:14:48 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:48.730127 | orchestrator | 2026-03-03 01:14:48 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:48.732152 | orchestrator | 2026-03-03 01:14:48 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:48.732210 | orchestrator | 2026-03-03 01:14:48 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:51.769798 | orchestrator | 2026-03-03 01:14:51 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:51.773154 | orchestrator | 2026-03-03 01:14:51 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:51.773282 | orchestrator | 2026-03-03 01:14:51 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:51.774158 | orchestrator | 2026-03-03 01:14:51 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:51.774198 | orchestrator | 2026-03-03 01:14:51 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:54.818834 | orchestrator | 2026-03-03 01:14:54 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:54.820790 | orchestrator | 2026-03-03 01:14:54 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:54.823215 | orchestrator | 2026-03-03 01:14:54 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state STARTED
2026-03-03 01:14:54.825982 | orchestrator | 2026-03-03 01:14:54 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:54.826089 | orchestrator | 2026-03-03 01:14:54 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:14:57.862319 | orchestrator | 2026-03-03 01:14:57 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:14:57.863956 | orchestrator | 2026-03-03 01:14:57 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:14:57.866697 | orchestrator | 2026-03-03 01:14:57 | INFO  | Task 898438c9-2799-4489-bcd0-a5e9652b6e53 is in state SUCCESS
2026-03-03 01:14:57.868793 | orchestrator |
2026-03-03 01:14:57.868853 | orchestrator |
2026-03-03 01:14:57.868862 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:14:57.868870 | orchestrator |
2026-03-03 01:14:57.868876 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:14:57.868883 | orchestrator | Tuesday 03 March 2026 01:14:17 +0000 (0:00:00.188) 0:00:00.188 *********
2026-03-03 01:14:57.868890 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:14:57.868896 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:14:57.868903 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:14:57.868908 | orchestrator |
2026-03-03 01:14:57.868914 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 01:14:57.868921 | orchestrator | Tuesday 03 March 2026 01:14:17 +0000 (0:00:00.424) 0:00:00.612 *********
2026-03-03 01:14:57.868928 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-03 01:14:57.868935 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-03 01:14:57.868941 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-03 01:14:57.868947 | orchestrator |
2026-03-03 01:14:57.868953 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-03 01:14:57.868959 | orchestrator |
2026-03-03 01:14:57.868966 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-03 01:14:57.868972 | orchestrator | Tuesday 03 March 2026 01:14:19 +0000 (0:00:01.412) 0:00:02.025 *********
2026-03-03 01:14:57.868979 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:14:57.868985 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:14:57.868992 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:14:57.868999 | orchestrator |
2026-03-03 01:14:57.869005 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:14:57.869012 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:14:57.869019 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:14:57.869026 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-03 01:14:57.869032 | orchestrator |
2026-03-03 01:14:57.869039 | orchestrator |
2026-03-03 01:14:57.869045 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:14:57.869051 | orchestrator | Tuesday 03 March 2026 01:14:20 +0000 (0:00:00.969) 0:00:02.994 *********
2026-03-03 01:14:57.869058 | orchestrator | ===============================================================================
2026-03-03 01:14:57.869064 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.41s
2026-03-03 01:14:57.869071 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.97s
2026-03-03 01:14:57.869077 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s
2026-03-03 01:14:57.869083 | orchestrator |
2026-03-03 01:14:57.869089 | orchestrator |
2026-03-03 01:14:57.869131 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:14:57.869139 | orchestrator |
2026-03-03 01:14:57.869145 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:14:57.869151 | orchestrator | Tuesday 03 March 2026 01:13:17 +0000 (0:00:00.515) 0:00:00.515 *********
2026-03-03 01:14:57.869156 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:14:57.869381 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:14:57.869397 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:14:57.869414 | orchestrator |
2026-03-03 01:14:57.869420 | orchestrator | TASK [Group 
hosts based on enabled services] *********************************** 2026-03-03 01:14:57.869426 | orchestrator | Tuesday 03 March 2026 01:13:17 +0000 (0:00:00.787) 0:00:01.304 ********* 2026-03-03 01:14:57.869548 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-03 01:14:57.869557 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-03 01:14:57.869563 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-03 01:14:57.869569 | orchestrator | 2026-03-03 01:14:57.869574 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-03 01:14:57.869580 | orchestrator | 2026-03-03 01:14:57.869586 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-03 01:14:57.869592 | orchestrator | Tuesday 03 March 2026 01:13:18 +0000 (0:00:00.599) 0:00:01.903 ********* 2026-03-03 01:14:57.869597 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:14:57.869603 | orchestrator | 2026-03-03 01:14:57.869608 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-03 01:14:57.869613 | orchestrator | Tuesday 03 March 2026 01:13:19 +0000 (0:00:00.757) 0:00:02.661 ********* 2026-03-03 01:14:57.869620 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-03 01:14:57.869625 | orchestrator | 2026-03-03 01:14:57.869631 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-03 01:14:57.869636 | orchestrator | Tuesday 03 March 2026 01:13:22 +0000 (0:00:03.221) 0:00:05.883 ********* 2026-03-03 01:14:57.869642 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-03 01:14:57.869647 | orchestrator | changed: [testbed-node-0] => (item=magnum -> 
https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-03 01:14:57.869653 | orchestrator | 2026-03-03 01:14:57.869658 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-03 01:14:57.869664 | orchestrator | Tuesday 03 March 2026 01:13:28 +0000 (0:00:06.194) 0:00:12.077 ********* 2026-03-03 01:14:57.869670 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-03 01:14:57.869675 | orchestrator | 2026-03-03 01:14:57.869681 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-03 01:14:57.869687 | orchestrator | Tuesday 03 March 2026 01:13:31 +0000 (0:00:03.173) 0:00:15.250 ********* 2026-03-03 01:14:57.869716 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-03 01:14:57.869722 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-03 01:14:57.869728 | orchestrator | 2026-03-03 01:14:57.869734 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-03 01:14:57.869740 | orchestrator | Tuesday 03 March 2026 01:13:35 +0000 (0:00:04.059) 0:00:19.310 ********* 2026-03-03 01:14:57.869746 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-03 01:14:57.869751 | orchestrator | 2026-03-03 01:14:57.869757 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-03 01:14:57.869763 | orchestrator | Tuesday 03 March 2026 01:13:39 +0000 (0:00:03.151) 0:00:22.462 ********* 2026-03-03 01:14:57.869768 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-03 01:14:57.869773 | orchestrator | 2026-03-03 01:14:57.869779 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-03 01:14:57.869785 | orchestrator | Tuesday 03 March 2026 01:13:42 +0000 (0:00:03.416) 0:00:25.879 ********* 2026-03-03 01:14:57.869791 | orchestrator | 
changed: [testbed-node-0] 2026-03-03 01:14:57.869797 | orchestrator | 2026-03-03 01:14:57.869803 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-03 01:14:57.869809 | orchestrator | Tuesday 03 March 2026 01:13:45 +0000 (0:00:03.202) 0:00:29.081 ********* 2026-03-03 01:14:57.869815 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:14:57.869821 | orchestrator | 2026-03-03 01:14:57.869827 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-03 01:14:57.869833 | orchestrator | Tuesday 03 March 2026 01:13:49 +0000 (0:00:04.055) 0:00:33.136 ********* 2026-03-03 01:14:57.869839 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:14:57.869862 | orchestrator | 2026-03-03 01:14:57.869869 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-03 01:14:57.869876 | orchestrator | Tuesday 03 March 2026 01:13:53 +0000 (0:00:03.508) 0:00:36.645 ********* 2026-03-03 01:14:57.869885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.869894 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.869901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.869950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.869959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.869978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.869985 | orchestrator | 2026-03-03 01:14:57.869992 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-03 01:14:57.869999 | orchestrator | Tuesday 03 March 2026 01:13:55 +0000 (0:00:01.729) 0:00:38.374 ********* 2026-03-03 01:14:57.870007 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:57.870131 | orchestrator | 2026-03-03 01:14:57.870155 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-03 01:14:57.870164 | orchestrator | Tuesday 03 March 2026 01:13:55 +0000 (0:00:00.201) 0:00:38.576 ********* 2026-03-03 01:14:57.870172 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:57.870179 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:57.870186 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:57.870192 | orchestrator | 2026-03-03 01:14:57.870198 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-03 01:14:57.870205 | orchestrator | Tuesday 03 March 2026 01:13:55 +0000 (0:00:00.703) 0:00:39.279 ********* 2026-03-03 01:14:57.870212 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 01:14:57.870219 | orchestrator | 2026-03-03 01:14:57.870226 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-03 01:14:57.870234 | orchestrator | Tuesday 03 March 2026 01:13:56 +0000 (0:00:00.935) 0:00:40.215 ********* 2026-03-03 01:14:57.870242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870308 | orchestrator | 2026-03-03 01:14:57.870314 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-03 01:14:57.870320 | orchestrator | Tuesday 03 March 2026 01:13:59 +0000 (0:00:02.486) 0:00:42.701 ********* 2026-03-03 01:14:57.870326 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:14:57.870333 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:14:57.870339 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:14:57.870345 | orchestrator | 2026-03-03 01:14:57.870396 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-03 01:14:57.870427 | orchestrator | Tuesday 03 March 2026 01:13:59 +0000 (0:00:00.296) 0:00:42.998 ********* 2026-03-03 01:14:57.870434 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:14:57.870456 | orchestrator | 2026-03-03 01:14:57.870462 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-03 01:14:57.870467 | orchestrator | Tuesday 03 March 2026 01:14:00 +0000 (0:00:00.637) 0:00:43.635 ********* 
2026-03-03 01:14:57.870473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870599 | orchestrator | 2026-03-03 01:14:57.870605 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-03 01:14:57.870616 | orchestrator | Tuesday 03 March 2026 01:14:02 +0000 (0:00:02.250) 0:00:45.886 ********* 2026-03-03 01:14:57.870622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:14:57.870629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:14:57.870634 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:57.870641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:14:57.870684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:14:57.870691 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:57.870696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:14:57.870702 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:14:57.870708 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:57.870713 | orchestrator | 2026-03-03 01:14:57.870718 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-03 01:14:57.870724 | orchestrator | Tuesday 03 March 2026 01:14:03 +0000 (0:00:00.542) 0:00:46.429 ********* 2026-03-03 01:14:57.870729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2026-03-03 01:14:57.870751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:14:57.870760 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:57.870770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:14:57.870777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:14:57.870782 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:57.870788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:14:57.870794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:14:57.870804 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:57.870809 | orchestrator | 2026-03-03 01:14:57.870815 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-03 01:14:57.870820 | orchestrator | Tuesday 03 March 2026 01:14:03 +0000 (0:00:00.894) 0:00:47.323 ********* 2026-03-03 01:14:57.870832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870885 | orchestrator | 2026-03-03 01:14:57.870891 | orchestrator | TASK 
[magnum : Copying over magnum.conf] *************************************** 2026-03-03 01:14:57.870896 | orchestrator | Tuesday 03 March 2026 01:14:06 +0000 (0:00:02.470) 0:00:49.794 ********* 2026-03-03 01:14:57.870902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.870923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.870955 | orchestrator | 2026-03-03 01:14:57.870961 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-03 01:14:57.870967 | orchestrator | Tuesday 03 March 2026 01:14:10 +0000 (0:00:04.542) 0:00:54.337 ********* 2026-03-03 01:14:57.870973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:14:57.870979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:14:57.870988 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:14:57.870994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:14:57.871009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:14:57.871015 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:14:57.871020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-03 01:14:57.871026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:14:57.871035 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:14:57.871041 | orchestrator | 2026-03-03 01:14:57.871047 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-03 01:14:57.871053 | orchestrator | Tuesday 03 March 2026 01:14:11 +0000 (0:00:00.628) 0:00:54.966 ********* 2026-03-03 01:14:57.871059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.871071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.871078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-03 01:14:57.871084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.871089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:14:57.871107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-03 01:14:57.871113 | orchestrator |
2026-03-03 01:14:57.871145 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-03 01:14:57.871151 | orchestrator | Tuesday 03 March 2026 01:14:13 +0000 (0:00:02.109) 0:00:57.076 *********
2026-03-03 01:14:57.871156 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:14:57.871162 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:14:57.871167 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:14:57.871173 | orchestrator |
2026-03-03 01:14:57.871178 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-03-03 01:14:57.871184 | orchestrator | Tuesday 03 March 2026 01:14:14 +0000 (0:00:00.352) 0:00:57.428 *********
2026-03-03 01:14:57.871189 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:14:57.871194 | orchestrator |
2026-03-03 01:14:57.871200 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-03 01:14:57.871206 | orchestrator | Tuesday 03 March 2026 01:14:16 +0000 (0:00:01.983) 0:00:59.412 *********
2026-03-03 01:14:57.871212 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:14:57.871217 | orchestrator |
2026-03-03 01:14:57.871223 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-03 01:14:57.871240 | orchestrator | Tuesday 03 March 2026 01:14:18 +0000 (0:00:14.968) 0:01:01.393 *********
2026-03-03 01:14:57.871250 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:14:57.871256 | orchestrator |
2026-03-03 01:14:57.871262 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-03 01:14:57.871268 | orchestrator | Tuesday 03 March 2026 01:14:32 +0000 (0:00:14.968) 0:01:16.361 *********
2026-03-03 01:14:57.871274 | orchestrator |
2026-03-03 01:14:57.871280 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-03 01:14:57.871286 | orchestrator | Tuesday 03 March 2026 01:14:33 +0000 (0:00:00.069) 0:01:16.431 *********
2026-03-03 01:14:57.871292 | orchestrator |
2026-03-03 01:14:57.871299 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-03 01:14:57.871305 | orchestrator | Tuesday 03 March 2026 01:14:33 +0000 (0:00:00.062) 0:01:16.493 *********
2026-03-03 01:14:57.871311 | orchestrator |
2026-03-03 01:14:57.871317 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-03 01:14:57.871323 | orchestrator | Tuesday 03 March 2026 01:14:33 +0000 (0:00:00.065) 0:01:16.559 *********
2026-03-03 01:14:57.871329 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:14:57.871335 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:14:57.871341 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:14:57.871347 | orchestrator |
2026-03-03 01:14:57.871353 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-03 01:14:57.871359 | orchestrator | Tuesday 03 March 2026 01:14:45 +0000 (0:00:11.916) 0:01:28.475 *********
2026-03-03 01:14:57.871371 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:14:57.871376 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:14:57.871382 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:14:57.871387 | orchestrator |
2026-03-03 01:14:57.871393 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:14:57.871399 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-03 01:14:57.871405 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-03 01:14:57.871411 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-03 01:14:57.871416 | orchestrator |
2026-03-03 01:14:57.871465 | orchestrator |
2026-03-03 01:14:57.871472 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:14:57.871477 | orchestrator | Tuesday 03 March 2026 01:14:54 +0000 (0:00:09.609) 0:01:38.085 *********
2026-03-03 01:14:57.871483 | orchestrator | ===============================================================================
2026-03-03 01:14:57.871488 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.97s
2026-03-03 01:14:57.871494 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 11.92s
2026-03-03 01:14:57.871499 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.61s
2026-03-03 01:14:57.871505 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.19s
2026-03-03 01:14:57.871510 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.54s
2026-03-03 01:14:57.871516 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.06s
2026-03-03 01:14:57.871521 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.06s
2026-03-03 01:14:57.871526 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.51s
2026-03-03 01:14:57.871532 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.42s
2026-03-03 01:14:57.871537 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.22s
2026-03-03 01:14:57.871543 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.20s
2026-03-03 01:14:57.871548 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.17s
2026-03-03 01:14:57.871553 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.15s
2026-03-03 01:14:57.871559 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.49s
2026-03-03 01:14:57.871565 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.47s
2026-03-03 01:14:57.871571 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.25s
2026-03-03 01:14:57.871577 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.11s
2026-03-03 01:14:57.871583 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.98s
2026-03-03 01:14:57.871590 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.98s
2026-03-03 01:14:57.871595 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.73s
2026-03-03 01:14:57.871601 | orchestrator | 2026-03-03 01:14:57 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:14:57.871606 | orchestrator | 2026-03-03 01:14:57 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:15:00.909611 | orchestrator | 2026-03-03 01:15:00 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:15:00.911579 | orchestrator | 2026-03-03 01:15:00 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:15:00.913669 | orchestrator | 2026-03-03 01:15:00 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03
01:15:00.913723 | orchestrator | 2026-03-03 01:15:00 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:15:03.960560 | orchestrator | 2026-03-03 01:15:03 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:15:03.962896 | orchestrator | 2026-03-03 01:15:03 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:15:03.964962 | orchestrator | 2026-03-03 01:15:03 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:15:03.965007 | orchestrator | 2026-03-03 01:15:03 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:15:07.008159 | orchestrator | 2026-03-03 01:15:07 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:15:07.011180 | orchestrator | 2026-03-03 01:15:07 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:15:07.014568 | orchestrator | 2026-03-03 01:15:07 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:15:07.014634 | orchestrator | 2026-03-03 01:15:07 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:15:10.052607 | orchestrator | 2026-03-03 01:15:10 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:15:10.054300 | orchestrator | 2026-03-03 01:15:10 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:15:10.057853 | orchestrator | 2026-03-03 01:15:10 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:15:10.057905 | orchestrator | 2026-03-03 01:15:10 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:15:13.098407 | orchestrator | 2026-03-03 01:15:13 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:15:13.098831 | orchestrator | 2026-03-03 01:15:13 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:15:13.100772 | orchestrator | 2026-03-03 01:15:13 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:15:13.100837 | orchestrator | 2026-03-03 01:15:13 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:15:16.144894 | orchestrator | 2026-03-03 01:15:16 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:15:16.147758 | orchestrator | 2026-03-03 01:15:16 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:15:16.150723 | orchestrator | 2026-03-03 01:15:16 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:15:16.150782 | orchestrator | 2026-03-03 01:15:16 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:15:19.197696 | orchestrator | 2026-03-03 01:15:19 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:15:19.199310 | orchestrator | 2026-03-03 01:15:19 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:15:19.200980 | orchestrator | 2026-03-03 01:15:19 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:15:19.201025 | orchestrator | 2026-03-03 01:15:19 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:15:22.253148 | orchestrator | 2026-03-03 01:15:22 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:15:22.254808 | orchestrator | 2026-03-03 01:15:22 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:15:22.256785 | orchestrator | 2026-03-03 01:15:22 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:15:22.256850 | orchestrator | 2026-03-03 01:15:22 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:15:25.310226 | orchestrator | 2026-03-03 01:15:25 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:15:25.312938 | orchestrator | 2026-03-03 01:15:25 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in
state STARTED 2026-03-03 01:15:25.314785 | orchestrator | 2026-03-03 01:15:25 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:25.314925 | orchestrator | 2026-03-03 01:15:25 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:28.365218 | orchestrator | 2026-03-03 01:15:28 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:28.366871 | orchestrator | 2026-03-03 01:15:28 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:28.368482 | orchestrator | 2026-03-03 01:15:28 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:28.368619 | orchestrator | 2026-03-03 01:15:28 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:31.415420 | orchestrator | 2026-03-03 01:15:31 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:31.416874 | orchestrator | 2026-03-03 01:15:31 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:31.419517 | orchestrator | 2026-03-03 01:15:31 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:31.419601 | orchestrator | 2026-03-03 01:15:31 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:34.479845 | orchestrator | 2026-03-03 01:15:34 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:34.481209 | orchestrator | 2026-03-03 01:15:34 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:34.483934 | orchestrator | 2026-03-03 01:15:34 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:34.483979 | orchestrator | 2026-03-03 01:15:34 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:37.517802 | orchestrator | 2026-03-03 01:15:37 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:37.517936 | orchestrator 
| 2026-03-03 01:15:37 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:37.520438 | orchestrator | 2026-03-03 01:15:37 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:37.520539 | orchestrator | 2026-03-03 01:15:37 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:40.547313 | orchestrator | 2026-03-03 01:15:40 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:40.547474 | orchestrator | 2026-03-03 01:15:40 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:40.548145 | orchestrator | 2026-03-03 01:15:40 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:40.548173 | orchestrator | 2026-03-03 01:15:40 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:43.570906 | orchestrator | 2026-03-03 01:15:43 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:43.571043 | orchestrator | 2026-03-03 01:15:43 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:43.572344 | orchestrator | 2026-03-03 01:15:43 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:43.572401 | orchestrator | 2026-03-03 01:15:43 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:46.613469 | orchestrator | 2026-03-03 01:15:46 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:46.615164 | orchestrator | 2026-03-03 01:15:46 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:46.616367 | orchestrator | 2026-03-03 01:15:46 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:46.616392 | orchestrator | 2026-03-03 01:15:46 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:49.657060 | orchestrator | 2026-03-03 01:15:49 | INFO  | Task 
c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:49.659414 | orchestrator | 2026-03-03 01:15:49 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:49.661172 | orchestrator | 2026-03-03 01:15:49 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:49.661255 | orchestrator | 2026-03-03 01:15:49 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:52.709326 | orchestrator | 2026-03-03 01:15:52 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:52.712320 | orchestrator | 2026-03-03 01:15:52 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:52.715227 | orchestrator | 2026-03-03 01:15:52 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:52.715577 | orchestrator | 2026-03-03 01:15:52 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:55.764095 | orchestrator | 2026-03-03 01:15:55 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:55.766926 | orchestrator | 2026-03-03 01:15:55 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:55.770051 | orchestrator | 2026-03-03 01:15:55 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:55.770072 | orchestrator | 2026-03-03 01:15:55 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:15:58.815698 | orchestrator | 2026-03-03 01:15:58 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED 2026-03-03 01:15:58.820831 | orchestrator | 2026-03-03 01:15:58 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED 2026-03-03 01:15:58.826671 | orchestrator | 2026-03-03 01:15:58 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:15:58.826693 | orchestrator | 2026-03-03 01:15:58 | INFO  | Wait 1 second(s) until the next 
check 2026-03-03 01:16:01.878324 | orchestrator | 2026-03-03 01:16:01 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:16:01.880954 | orchestrator | 2026-03-03 01:16:01 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state STARTED
2026-03-03 01:16:01.883364 | orchestrator | 2026-03-03 01:16:01 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:16:01.883632 | orchestrator | 2026-03-03 01:16:01 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:16:04.922048 | orchestrator | 2026-03-03 01:16:04 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:16:04.923884 | orchestrator | 2026-03-03 01:16:04 | INFO  | Task bf4ef5ec-c97b-4b51-9360-fbe34e7b9f1f is in state SUCCESS
2026-03-03 01:16:04.925958 | orchestrator |
2026-03-03 01:16:04.926072 | orchestrator |
2026-03-03 01:16:04.926083 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:16:04.926089 | orchestrator |
2026-03-03 01:16:04.926093 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:16:04.926115 | orchestrator | Tuesday 03 March 2026 01:14:04 +0000 (0:00:00.237) 0:00:00.237 *********
2026-03-03 01:16:04.926119 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:16:04.926124 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:16:04.926128 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:16:04.926139 | orchestrator |
2026-03-03 01:16:04.926143 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 01:16:04.926147 | orchestrator | Tuesday 03 March 2026 01:14:04 +0000 (0:00:00.258) 0:00:00.495 *********
2026-03-03 01:16:04.926151 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-03 01:16:04.926156 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-03 01:16:04.926160 |
orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-03 01:16:04.926164 | orchestrator |
2026-03-03 01:16:04.926167 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-03 01:16:04.926171 | orchestrator |
2026-03-03 01:16:04.926175 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-03 01:16:04.926179 | orchestrator | Tuesday 03 March 2026 01:14:05 +0000 (0:00:00.356) 0:00:00.852 *********
2026-03-03 01:16:04.926183 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:16:04.926187 | orchestrator |
2026-03-03 01:16:04.926191 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-03-03 01:16:04.926195 | orchestrator | Tuesday 03 March 2026 01:14:05 +0000 (0:00:00.496) 0:00:01.348 *********
2026-03-03 01:16:04.926201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926216 | orchestrator |
2026-03-03 01:16:04.926221 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-03-03 01:16:04.926225 | orchestrator | Tuesday 03 March 2026 01:14:06 +0000 (0:00:00.708) 0:00:02.057 *********
2026-03-03 01:16:04.926229 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-03-03 01:16:04.926237 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-03-03 01:16:04.926241 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-03 01:16:04.926245 | orchestrator |
2026-03-03 01:16:04.926249 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-03 01:16:04.926253 | orchestrator | Tuesday 03 March 2026 01:14:07 +0000 (0:00:00.782) 0:00:02.839 *********
2026-03-03 01:16:04.926257 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:16:04.926261 | orchestrator |
2026-03-03 01:16:04.926264 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-03-03 01:16:04.926268 | orchestrator | Tuesday 03 March 2026 01:14:07 +0000 (0:00:00.575) 0:00:03.414 *********
2026-03-03 01:16:04.926530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926707 | orchestrator |
2026-03-03 01:16:04.926712 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-03-03 01:16:04.926716 | orchestrator | Tuesday 03 March 2026 01:14:09 +0000 (0:00:01.401) 0:00:04.816 *********
2026-03-03 01:16:04.926721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926725 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:16:04.926730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926741 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:16:04.926761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926766 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:16:04.926769 | orchestrator |
2026-03-03 01:16:04.926773 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-03-03 01:16:04.926777 | orchestrator | Tuesday 03 March 2026 01:14:09 +0000 (0:00:00.381) 0:00:05.198 *********
2026-03-03 01:16:04.926782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926786 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:16:04.926789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926793 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:16:04.926797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926801 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:16:04.926805 | orchestrator |
2026-03-03 01:16:04.926809 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-03-03 01:16:04.926816 | orchestrator | Tuesday 03 March 2026 01:14:10 +0000
(0:00:00.930) 0:00:06.128 *********
2026-03-03 01:16:04.926820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926843 | orchestrator |
2026-03-03 01:16:04.926847 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-03-03 01:16:04.926851 | orchestrator | Tuesday 03 March 2026 01:14:11 +0000 (0:00:01.210) 0:00:07.339 *********
2026-03-03 01:16:04.926855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-03 01:16:04.926870 | orchestrator |
2026-03-03 01:16:04.926874 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-03 01:16:04.926878 | orchestrator | Tuesday 03 March 2026 01:14:13 +0000 (0:00:00.463) 0:00:08.908 *********
2026-03-03 01:16:04.926882 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:16:04.926886 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:16:04.926890 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:16:04.926893 | orchestrator |
2026-03-03 01:16:04.926897 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-03 01:16:04.926901 | orchestrator | Tuesday 03 March 2026 01:14:13 +0000 (0:00:00.463) 0:00:09.371 *********
2026-03-03 01:16:04.926905 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-03 01:16:04.926910 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-03 01:16:04.926914 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-03 01:16:04.926918 | orchestrator |
2026-03-03 01:16:04.926921 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-03 01:16:04.926925 | orchestrator | Tuesday 03 March 2026 01:14:14 +0000 (0:00:01.237) 0:00:10.609 *********
2026-03-03 01:16:04.926929 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-03 01:16:04.926933 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-03 01:16:04.926937 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-03 01:16:04.926941 | orchestrator |
2026-03-03 01:16:04.926945 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-03 01:16:04.926948 | orchestrator | Tuesday 03 March 2026 01:14:16 +0000 (0:00:01.468) 0:00:12.078 *********
2026-03-03 01:16:04.926961 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-03 01:16:04.926967 | orchestrator |
2026-03-03 01:16:04.926974 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-03 01:16:04.926980 | orchestrator | Tuesday 03 March 2026 01:14:17 +0000 (0:00:01.102) 0:00:13.180 *********
2026-03-03 01:16:04.926985 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-03 01:16:04.926996 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-03 01:16:04.927004 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:16:04.927010 | orchestrator | ok: [testbed-node-1]
2026-03-03 01:16:04.927015 | orchestrator | ok: [testbed-node-2]
2026-03-03 01:16:04.927022 | orchestrator |
2026-03-03 01:16:04.927028 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-03 01:16:04.927034 | orchestrator | Tuesday 03 March 2026 01:14:18 +0000 (0:00:00.907) 0:00:14.088 *********
2026-03-03 01:16:04.927041 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:16:04.927047 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:16:04.927053 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:16:04.927060 | orchestrator |
2026-03-03 01:16:04.927067 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-03 01:16:04.927073 | orchestrator | Tuesday 03 March 2026 01:14:19 +0000 (0:00:00.788) 0:00:14.876 *********
2026-03-03 01:16:04.927080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1099490, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5889475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1099490, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5889475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1099490, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5889475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1099524, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5947068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1099524, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5947068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1099524, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5947068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1099585, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6031713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1099585, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6031713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1099585, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6031713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1099512, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5926867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1099512, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5926867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value':
{'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1099512, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5926867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1099588, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6036103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1099588, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6036103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927195 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1099588, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6036103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1099498, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.589949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1099498, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.589949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 
01:16:04.927218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1099498, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.589949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1099554, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5976658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1099554, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5976658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1099554, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5976658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1099576, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.602238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1099576, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.602238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1099576, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.602238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1099489, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5882401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1099489, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5882401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1099489, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5882401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099497, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5894933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099497, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5894933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099497, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5894933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1099520, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5928912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1099520, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5928912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1099520, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5928912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1099557, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5992732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1099557, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5992732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1099557, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5992732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1099584, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6027913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1099584, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6027913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1099584, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6027913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099503, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5916116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099503, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5916116, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099503, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5916116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1099568, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6013584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1099568, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6013584, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1099568, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6013584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1099593, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6047275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1099593, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 
1772497466.6047275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1099593, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6047275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1099555, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5985596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1099555, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 
1772496139.0, 'ctime': 1772497466.5985596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1099555, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5985596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1099550, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5976658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 
1099550, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.5976658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-03 01:16:04.927505 | orchestrator | changed: [testbed-node-2] => (item=ceph/multi-cluster-overview.json, 63043 bytes, inode 1099550)
[loop output condensed: each dashboard file below was reported "changed" on testbed-node-0, testbed-node-1, and testbed-node-2, with identical stat details on every node: regular file under /operations/grafana/dashboards/, mode 0644, owner root:root, dev 109, nlink 1, atime/mtime 1772496139.0]
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): ceph/hosts-overview.json (27387 bytes, inode 1099544)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): ceph/pool-overview.json (49016 bytes, inode 1099562)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): ceph/host-details.json (43303 bytes, inode 1099540)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): ceph/radosgw-sync-overview.json (16614 bytes, inode 1099582)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): ceph/ceph-nvmeof.json (52667 bytes, inode 1099500)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): openstack/openstack.json (57270 bytes, inode 1099757)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/haproxy.json (410814 bytes, inode 1099630)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/database.json (30898 bytes, inode 1099616)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/node-rsrc-use.json (15767 bytes, inode 1099671)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/alertmanager-overview.json (9645 bytes, inode 1099599)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/opensearch.json (65458 bytes, inode 1099716)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/node_exporter_full.json (682774 bytes, inode 1099675)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/prometheus-remote-write.json (22303 bytes, inode 1099721)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/redfish.json (38087 bytes, inode 1099750)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/nodes.json (21194 bytes, inode 1099714)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/memcached.json (24243 bytes, inode 1099660)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/fluentd.json (82960 bytes, inode 1099629)
2026-03-03 01:16:04 | orchestrator | changed (all 3 nodes): infrastructure/libvirt.json (29672 bytes, inode 1099655)
2026-03-03 01:16:04 | orchestrator | changed (testbed-node-0, testbed-node-1): infrastructure/elasticsearch.json (187864 bytes, inode 1099626)
2026-03-03 01:16:04.927863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0,
'size': 187864, 'inode': 1099626, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6106105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1099663, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6184568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1099663, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6184568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1099663, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6184568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1099738, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6313434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1099738, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6313434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1099738, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6313434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1099729, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6294513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1099729, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6294513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 
01:16:04.927910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1099729, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6294513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1099605, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6069043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1099605, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6069043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.927998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1099605, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6069043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1099612, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6078577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1099612, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6078577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1099612, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6078577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1099707, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6259289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1099707, 'dev': 109, 'nlink': 1, 'atime': 
1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6259289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1099707, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6259289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1099725, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6281781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1099725, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6281781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1099725, 'dev': 109, 'nlink': 1, 'atime': 1772496139.0, 'mtime': 1772496139.0, 'ctime': 1772497466.6281781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-03 01:16:04.928071 | orchestrator | 2026-03-03 01:16:04.928075 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-03 01:16:04.928080 | orchestrator | Tuesday 03 March 2026 01:14:56 +0000 (0:00:37.108) 0:00:51.984 ********* 2026-03-03 01:16:04.928084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-03 01:16:04.928091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-03 01:16:04.928095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-03 01:16:04.928099 | orchestrator | 2026-03-03 01:16:04.928103 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-03 01:16:04.928107 | orchestrator | Tuesday 03 March 2026 01:14:57 +0000 (0:00:00.993) 0:00:52.978 ********* 2026-03-03 01:16:04.928111 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:04.928116 | orchestrator | 2026-03-03 01:16:04.928120 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 
2026-03-03 01:16:04.928124 | orchestrator | Tuesday 03 March 2026 01:14:59 +0000 (0:00:02.382) 0:00:55.360 *********
2026-03-03 01:16:04.928127 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:16:04.928131 | orchestrator | 
2026-03-03 01:16:04.928135 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-03 01:16:04.928139 | orchestrator | Tuesday 03 March 2026 01:15:02 +0000 (0:00:02.295) 0:00:57.656 *********
2026-03-03 01:16:04.928142 | orchestrator | 
2026-03-03 01:16:04.928146 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-03 01:16:04.928150 | orchestrator | Tuesday 03 March 2026 01:15:02 +0000 (0:00:00.065) 0:00:57.721 *********
2026-03-03 01:16:04.928154 | orchestrator | 
2026-03-03 01:16:04.928158 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-03 01:16:04.928161 | orchestrator | Tuesday 03 March 2026 01:15:02 +0000 (0:00:00.218) 0:00:57.939 *********
2026-03-03 01:16:04.928165 | orchestrator | 
2026-03-03 01:16:04.928169 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-03 01:16:04.928173 | orchestrator | Tuesday 03 March 2026 01:15:02 +0000 (0:00:00.067) 0:00:58.007 *********
2026-03-03 01:16:04.928176 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:16:04.928182 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:16:04.928186 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:16:04.928189 | orchestrator | 
2026-03-03 01:16:04.928193 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-03 01:16:04.928197 | orchestrator | Tuesday 03 March 2026 01:15:04 +0000 (0:00:01.727) 0:00:59.734 *********
2026-03-03 01:16:04.928201 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:16:04.928205 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:16:04.928209 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-03 01:16:04.928216 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-03 01:16:04.928220 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:16:04.928225 | orchestrator | 
2026-03-03 01:16:04.928228 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-03 01:16:04.928233 | orchestrator | Tuesday 03 March 2026 01:15:30 +0000 (0:00:26.819) 0:01:26.554 *********
2026-03-03 01:16:04.928236 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:16:04.928240 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:16:04.928244 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:16:04.928248 | orchestrator | 
2026-03-03 01:16:04.928251 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-03 01:16:04.928255 | orchestrator | Tuesday 03 March 2026 01:15:57 +0000 (0:00:26.624) 0:01:53.179 *********
2026-03-03 01:16:04.928259 | orchestrator | ok: [testbed-node-0]
2026-03-03 01:16:04.928265 | orchestrator | 
2026-03-03 01:16:04.928271 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-03 01:16:04.928277 | orchestrator | Tuesday 03 March 2026 01:15:59 +0000 (0:00:02.051) 0:01:55.230 *********
2026-03-03 01:16:04.928283 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:16:04.928289 | orchestrator | skipping: [testbed-node-1]
2026-03-03 01:16:04.928294 | orchestrator | skipping: [testbed-node-2]
2026-03-03 01:16:04.928300 | orchestrator | 
2026-03-03 01:16:04.928305 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-03 01:16:04.928311 | orchestrator | Tuesday 03 March 2026 01:16:00 +0000 (0:00:00.467) 0:01:55.697 *********
2026-03-03 01:16:04.928317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-03 01:16:04.928325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-03 01:16:04.928331 | orchestrator | 
2026-03-03 01:16:04.928336 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-03 01:16:04.928342 | orchestrator | Tuesday 03 March 2026 01:16:02 +0000 (0:00:02.215) 0:01:57.913 *********
2026-03-03 01:16:04.928348 | orchestrator | skipping: [testbed-node-0]
2026-03-03 01:16:04.928354 | orchestrator | 
2026-03-03 01:16:04.928360 | orchestrator | PLAY RECAP *********************************************************************
2026-03-03 01:16:04.928365 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-03 01:16:04.928372 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-03 01:16:04.928377 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-03 01:16:04.928382 | orchestrator | 
2026-03-03 01:16:04.928389 | orchestrator | 
2026-03-03 01:16:04.928395 | orchestrator | TASKS RECAP ********************************************************************
2026-03-03 01:16:04.928400 | orchestrator | Tuesday 03 March 2026 01:16:02 +0000 (0:00:00.257) 0:01:58.170 *********
2026-03-03 01:16:04.928406 | orchestrator | ===============================================================================
2026-03-03 01:16:04.928411 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.11s
2026-03-03 01:16:04.928417 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.82s
2026-03-03 01:16:04.928431 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 26.62s
2026-03-03 01:16:04.928524 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.38s
2026-03-03 01:16:04.928540 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.30s
2026-03-03 01:16:04.928546 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.22s
2026-03-03 01:16:04.928552 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.05s
2026-03-03 01:16:04.928558 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.73s
2026-03-03 01:16:04.928568 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.57s
2026-03-03 01:16:04.928576 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.47s
2026-03-03 01:16:04.928582 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.40s
2026-03-03 01:16:04.928588 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s
2026-03-03 01:16:04.928600 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.21s
2026-03-03 01:16:04.928606 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.10s
2026-03-03 01:16:04.928612 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.99s
2026-03-03 01:16:04.928618 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.93s
2026-03-03 01:16:04.928624 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.91s
2026-03-03 01:16:04.928630 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.79s
2026-03-03 01:16:04.928635 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.78s
2026-03-03 01:16:04.928641 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.71s
2026-03-03 01:16:04.928647 | orchestrator | 2026-03-03 01:16:04 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:16:04.928654 | orchestrator | 2026-03-03 01:16:04 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:16:07.973343 | orchestrator | 2026-03-03 01:16:07 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:16:07.974966 | orchestrator | 2026-03-03 01:16:07 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:16:07.975028 | orchestrator | 2026-03-03 01:16:07 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:16:11.026829 | orchestrator | 2026-03-03 01:16:11 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:16:11.029200 | orchestrator | 2026-03-03 01:16:11 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:16:11.029265 | orchestrator | 2026-03-03 01:16:11 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:16:14.073372 | orchestrator | 2026-03-03 01:16:14 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:16:14.073496 | orchestrator | 2026-03-03 01:16:14 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:16:14.073506 | orchestrator | 2026-03-03 01:16:14 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:16:17.119770 | orchestrator | 2026-03-03 01:16:17 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:16:17.121569 | orchestrator | 2026-03-03 01:16:17 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:16:17.121638 | orchestrator | 2026-03-03 01:16:17 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:16:20.162609 | orchestrator | 2026-03-03 01:16:20 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state STARTED
2026-03-03 01:16:20.165055 | orchestrator | 2026-03-03 01:16:20 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED
2026-03-03 01:16:20.165149 | orchestrator | 2026-03-03 01:16:20 | INFO  | Wait 1 second(s) until the next check
2026-03-03 01:16:23.208124 | orchestrator | 2026-03-03 01:16:23 | INFO  | Task c4c04457-c428-44c3-a16b-7cd18cb6bb55 is in state SUCCESS
2026-03-03 01:16:23.210616 | orchestrator | 
2026-03-03 01:16:23.210694 | orchestrator | 
2026-03-03 01:16:23.210706 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-03 01:16:23.210714 | orchestrator | 
2026-03-03 01:16:23.210722 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-03 01:16:23.210729 | orchestrator | Tuesday 03 March 2026 01:07:47 +0000 (0:00:00.281) 0:00:00.281 *********
2026-03-03 01:16:23.210736 | orchestrator | changed: [testbed-manager]
2026-03-03 01:16:23.210745 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:16:23.210751 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:16:23.210759 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:16:23.210766 | orchestrator | changed: [testbed-node-3]
2026-03-03 01:16:23.210773 | orchestrator | changed: [testbed-node-4]
2026-03-03 01:16:23.210779 | orchestrator | changed: [testbed-node-5]
2026-03-03 01:16:23.210786 | orchestrator | 
2026-03-03 01:16:23.210793 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-03 01:16:23.210799 | orchestrator | Tuesday 03 March 2026 01:07:48 +0000 (0:00:00.934) 0:00:01.215 *********
2026-03-03 01:16:23.210806 | orchestrator | changed: [testbed-manager]
2026-03-03 01:16:23.210813 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:16:23.210819 | orchestrator | changed: [testbed-node-1]
2026-03-03 01:16:23.210826 | orchestrator | changed: [testbed-node-2]
2026-03-03 01:16:23.210832 | orchestrator | changed: [testbed-node-3]
2026-03-03 01:16:23.210839 | orchestrator | changed: [testbed-node-4]
2026-03-03 01:16:23.210845 | orchestrator | changed: [testbed-node-5]
2026-03-03 01:16:23.210852 | orchestrator | 
2026-03-03 01:16:23.210858 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-03 01:16:23.210865 | orchestrator | Tuesday 03 March 2026 01:07:49 +0000 (0:00:00.878) 0:00:02.093 *********
2026-03-03 01:16:23.210872 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-03 01:16:23.210878 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-03 01:16:23.210885 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-03 01:16:23.210891 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-03 01:16:23.210897 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-03 01:16:23.210904 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-03 01:16:23.210910 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-03 01:16:23.210916 | orchestrator | 
2026-03-03 01:16:23.210922 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-03 01:16:23.210929 | orchestrator | 
2026-03-03 01:16:23.210935 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-03 01:16:23.210941 | orchestrator | Tuesday 03 March 2026 01:07:50 +0000 (0:00:01.215) 0:00:03.309 *********
2026-03-03 01:16:23.210948 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-03 01:16:23.210954 | orchestrator | 
2026-03-03 01:16:23.210961 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-03 01:16:23.210968 | orchestrator | Tuesday 03 March 2026 01:07:51 +0000 (0:00:00.759) 0:00:04.068 *********
2026-03-03 01:16:23.210976 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-03 01:16:23.210983 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-03 01:16:23.210989 | orchestrator | 
2026-03-03 01:16:23.210995 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-03 01:16:23.211001 | orchestrator | Tuesday 03 March 2026 01:07:55 +0000 (0:00:03.858) 0:00:07.927 *********
2026-03-03 01:16:23.211008 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-03 01:16:23.211178 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-03 01:16:23.211191 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:16:23.211200 | orchestrator | 
2026-03-03 01:16:23.211209 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-03 01:16:23.211217 | orchestrator | Tuesday 03 March 2026 01:07:58 +0000 (0:00:03.571) 0:00:11.499 *********
2026-03-03 01:16:23.211226 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:16:23.211234 | orchestrator | 
2026-03-03 01:16:23.211242 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-03 01:16:23.211250 | orchestrator | Tuesday 03 March 2026 01:07:59 +0000 (0:00:00.767) 0:00:12.266 *********
2026-03-03 01:16:23.211258 | orchestrator | changed: [testbed-node-0]
2026-03-03 01:16:23.211266 | orchestrator | 
2026-03-03 01:16:23.211274 | orchestrator | TASK 
[nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-03 01:16:23.211282 | orchestrator | Tuesday 03 March 2026 01:08:01 +0000 (0:00:01.424) 0:00:13.691 ********* 2026-03-03 01:16:23.211290 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.211624 | orchestrator | 2026-03-03 01:16:23.211635 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-03 01:16:23.211643 | orchestrator | Tuesday 03 March 2026 01:08:04 +0000 (0:00:03.071) 0:00:16.762 ********* 2026-03-03 01:16:23.211651 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.211659 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.211666 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.211673 | orchestrator | 2026-03-03 01:16:23.211681 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-03 01:16:23.211688 | orchestrator | Tuesday 03 March 2026 01:08:04 +0000 (0:00:00.411) 0:00:17.174 ********* 2026-03-03 01:16:23.211731 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:16:23.211740 | orchestrator | 2026-03-03 01:16:23.211747 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-03 01:16:23.211754 | orchestrator | Tuesday 03 March 2026 01:08:33 +0000 (0:00:28.558) 0:00:45.732 ********* 2026-03-03 01:16:23.211760 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.211767 | orchestrator | 2026-03-03 01:16:23.211774 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-03 01:16:23.211912 | orchestrator | Tuesday 03 March 2026 01:08:46 +0000 (0:00:13.215) 0:00:58.947 ********* 2026-03-03 01:16:23.211920 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:16:23.211926 | orchestrator | 2026-03-03 01:16:23.211933 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-03 
01:16:23.211940 | orchestrator | Tuesday 03 March 2026 01:08:58 +0000 (0:00:12.346) 0:01:11.294 ********* 2026-03-03 01:16:23.211994 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:16:23.212003 | orchestrator | 2026-03-03 01:16:23.212011 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-03 01:16:23.212018 | orchestrator | Tuesday 03 March 2026 01:08:59 +0000 (0:00:01.022) 0:01:12.316 ********* 2026-03-03 01:16:23.212024 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.212032 | orchestrator | 2026-03-03 01:16:23.212039 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-03 01:16:23.212047 | orchestrator | Tuesday 03 March 2026 01:09:00 +0000 (0:00:00.467) 0:01:12.784 ********* 2026-03-03 01:16:23.212054 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:16:23.212060 | orchestrator | 2026-03-03 01:16:23.212066 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-03 01:16:23.212104 | orchestrator | Tuesday 03 March 2026 01:09:00 +0000 (0:00:00.642) 0:01:13.426 ********* 2026-03-03 01:16:23.212112 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:16:23.212169 | orchestrator | 2026-03-03 01:16:23.212177 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-03 01:16:23.212185 | orchestrator | Tuesday 03 March 2026 01:09:17 +0000 (0:00:16.666) 0:01:30.093 ********* 2026-03-03 01:16:23.212235 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.212243 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.212267 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.212274 | orchestrator | 2026-03-03 01:16:23.212280 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-03 
01:16:23.212287 | orchestrator | 2026-03-03 01:16:23.212294 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-03 01:16:23.212300 | orchestrator | Tuesday 03 March 2026 01:09:17 +0000 (0:00:00.317) 0:01:30.411 ********* 2026-03-03 01:16:23.212307 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:16:23.212313 | orchestrator | 2026-03-03 01:16:23.212320 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-03 01:16:23.212327 | orchestrator | Tuesday 03 March 2026 01:09:18 +0000 (0:00:00.559) 0:01:30.970 ********* 2026-03-03 01:16:23.212527 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.212534 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.212540 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.212546 | orchestrator | 2026-03-03 01:16:23.212553 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-03 01:16:23.212559 | orchestrator | Tuesday 03 March 2026 01:09:20 +0000 (0:00:02.176) 0:01:33.146 ********* 2026-03-03 01:16:23.212565 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.212570 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.212576 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.212582 | orchestrator | 2026-03-03 01:16:23.212588 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-03 01:16:23.212593 | orchestrator | Tuesday 03 March 2026 01:09:23 +0000 (0:00:02.679) 0:01:35.826 ********* 2026-03-03 01:16:23.212599 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.212605 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.212611 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.212617 | orchestrator | 2026-03-03 01:16:23.212624 | orchestrator | TASK [service-rabbitmq : 
nova | Ensure RabbitMQ users exist] ******************* 2026-03-03 01:16:23.212682 | orchestrator | Tuesday 03 March 2026 01:09:24 +0000 (0:00:00.897) 0:01:36.723 ********* 2026-03-03 01:16:23.212690 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-03 01:16:23.212698 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.212704 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-03 01:16:23.212948 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.212957 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-03 01:16:23.212964 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-03 01:16:23.212971 | orchestrator | 2026-03-03 01:16:23.212978 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-03 01:16:23.212985 | orchestrator | Tuesday 03 March 2026 01:09:32 +0000 (0:00:08.439) 0:01:45.163 ********* 2026-03-03 01:16:23.212992 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.212999 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213006 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213013 | orchestrator | 2026-03-03 01:16:23.213019 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-03 01:16:23.213025 | orchestrator | Tuesday 03 March 2026 01:09:32 +0000 (0:00:00.350) 0:01:45.514 ********* 2026-03-03 01:16:23.213031 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-03 01:16:23.213037 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213043 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-03 01:16:23.213049 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.213055 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-03 01:16:23.213062 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213069 | orchestrator | 2026-03-03 
01:16:23.213076 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-03 01:16:23.213094 | orchestrator | Tuesday 03 March 2026 01:09:34 +0000 (0:00:01.028) 0:01:46.542 ********* 2026-03-03 01:16:23.213101 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213108 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213114 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.213121 | orchestrator | 2026-03-03 01:16:23.213128 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-03 01:16:23.213135 | orchestrator | Tuesday 03 March 2026 01:09:34 +0000 (0:00:00.864) 0:01:47.407 ********* 2026-03-03 01:16:23.213142 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213149 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213155 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.213162 | orchestrator | 2026-03-03 01:16:23.213169 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-03 01:16:23.213175 | orchestrator | Tuesday 03 March 2026 01:09:35 +0000 (0:00:01.040) 0:01:48.447 ********* 2026-03-03 01:16:23.213182 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213188 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213273 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.213282 | orchestrator | 2026-03-03 01:16:23.213289 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-03 01:16:23.213295 | orchestrator | Tuesday 03 March 2026 01:09:38 +0000 (0:00:02.926) 0:01:51.374 ********* 2026-03-03 01:16:23.213302 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213309 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213315 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:16:23.213321 | orchestrator | 2026-03-03 
01:16:23.213328 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-03 01:16:23.213335 | orchestrator | Tuesday 03 March 2026 01:10:00 +0000 (0:00:21.721) 0:02:13.095 ********* 2026-03-03 01:16:23.213342 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213349 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213355 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:16:23.213362 | orchestrator | 2026-03-03 01:16:23.213368 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-03 01:16:23.213374 | orchestrator | Tuesday 03 March 2026 01:10:13 +0000 (0:00:12.688) 0:02:25.784 ********* 2026-03-03 01:16:23.213380 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213387 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:16:23.213393 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213400 | orchestrator | 2026-03-03 01:16:23.213406 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-03 01:16:23.213413 | orchestrator | Tuesday 03 March 2026 01:10:14 +0000 (0:00:00.920) 0:02:26.704 ********* 2026-03-03 01:16:23.213420 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213427 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213433 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.213439 | orchestrator | 2026-03-03 01:16:23.213494 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-03 01:16:23.213502 | orchestrator | Tuesday 03 March 2026 01:10:27 +0000 (0:00:13.213) 0:02:39.918 ********* 2026-03-03 01:16:23.213508 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.213515 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213521 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213527 | orchestrator | 2026-03-03 01:16:23.213532 
| orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-03 01:16:23.213538 | orchestrator | Tuesday 03 March 2026 01:10:28 +0000 (0:00:00.997) 0:02:40.916 ********* 2026-03-03 01:16:23.213544 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.213551 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.213558 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.213565 | orchestrator | 2026-03-03 01:16:23.213572 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-03 01:16:23.213579 | orchestrator | 2026-03-03 01:16:23.213586 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-03 01:16:23.213602 | orchestrator | Tuesday 03 March 2026 01:10:28 +0000 (0:00:00.512) 0:02:41.428 ********* 2026-03-03 01:16:23.213609 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:16:23.213617 | orchestrator | 2026-03-03 01:16:23.213624 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-03 01:16:23.213631 | orchestrator | Tuesday 03 March 2026 01:10:29 +0000 (0:00:00.542) 0:02:41.971 ********* 2026-03-03 01:16:23.213638 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-03 01:16:23.213644 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-03 01:16:23.213651 | orchestrator | 2026-03-03 01:16:23.213658 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-03 01:16:23.213665 | orchestrator | Tuesday 03 March 2026 01:10:32 +0000 (0:00:03.150) 0:02:45.121 ********* 2026-03-03 01:16:23.213672 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-03 01:16:23.213681 | 
orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-03 01:16:23.213687 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-03 01:16:23.213694 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-03 01:16:23.213701 | orchestrator | 2026-03-03 01:16:23.213707 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-03 01:16:23.213714 | orchestrator | Tuesday 03 March 2026 01:10:38 +0000 (0:00:06.327) 0:02:51.449 ********* 2026-03-03 01:16:23.213720 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-03 01:16:23.213727 | orchestrator | 2026-03-03 01:16:23.213733 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-03 01:16:23.213740 | orchestrator | Tuesday 03 March 2026 01:10:42 +0000 (0:00:03.304) 0:02:54.754 ********* 2026-03-03 01:16:23.213746 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-03 01:16:23.213753 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-03 01:16:23.213760 | orchestrator | 2026-03-03 01:16:23.213766 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-03 01:16:23.213773 | orchestrator | Tuesday 03 March 2026 01:10:46 +0000 (0:00:03.889) 0:02:58.643 ********* 2026-03-03 01:16:23.213780 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-03 01:16:23.213786 | orchestrator | 2026-03-03 01:16:23.213792 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-03 01:16:23.213799 | orchestrator | Tuesday 03 March 2026 01:10:49 +0000 (0:00:03.017) 0:03:01.660 ********* 2026-03-03 01:16:23.213805 | orchestrator | changed: [testbed-node-0] => 
(item=nova -> service -> admin) 2026-03-03 01:16:23.213812 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-03 01:16:23.213818 | orchestrator | 2026-03-03 01:16:23.213825 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-03 01:16:23.213900 | orchestrator | Tuesday 03 March 2026 01:10:55 +0000 (0:00:06.547) 0:03:08.208 ********* 2026-03-03 01:16:23.213916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.213936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.213943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.213975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.213985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.213998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214006 | orchestrator | 2026-03-03 01:16:23.214047 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-03 01:16:23.214056 | orchestrator | Tuesday 03 March 2026 01:10:57 +0000 (0:00:01.405) 0:03:09.613 ********* 2026-03-03 01:16:23.214063 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.214070 | orchestrator | 2026-03-03 01:16:23.214078 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-03 01:16:23.214085 | orchestrator | Tuesday 03 March 2026 01:10:57 +0000 (0:00:00.254) 0:03:09.868 ********* 2026-03-03 01:16:23.214092 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.214099 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.214107 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.214114 | orchestrator | 2026-03-03 01:16:23.214121 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-03 01:16:23.214128 | orchestrator | Tuesday 03 March 2026 01:10:58 +0000 (0:00:01.107) 0:03:10.976 ********* 2026-03-03 01:16:23.214135 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-03 01:16:23.214143 | orchestrator | 2026-03-03 01:16:23.214150 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-03 01:16:23.214157 | orchestrator | Tuesday 03 March 2026 01:10:59 +0000 (0:00:01.493) 0:03:12.469 ********* 2026-03-03 01:16:23.214165 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.214172 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.214179 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.214186 | orchestrator | 2026-03-03 01:16:23.214193 | orchestrator | TASK [nova : include_tasks] **************************************************** 
2026-03-03 01:16:23.214200 | orchestrator | Tuesday 03 March 2026 01:11:00 +0000 (0:00:00.652) 0:03:13.121 ********* 2026-03-03 01:16:23.214208 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:16:23.214215 | orchestrator | 2026-03-03 01:16:23.214222 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-03 01:16:23.214229 | orchestrator | Tuesday 03 March 2026 01:11:01 +0000 (0:00:01.189) 0:03:14.311 ********* 2026-03-03 01:16:23.214236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.214276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.214285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.214293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214337 | orchestrator | 2026-03-03 01:16:23.214344 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-03 01:16:23.214351 | orchestrator | Tuesday 03 March 2026 01:11:05 +0000 (0:00:03.467) 0:03:17.779 ********* 2026-03-03 01:16:23.214359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-03 01:16:23.214367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.214374 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.214380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-03 01:16:23.214387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.214398 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.214426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-03 01:16:23.214434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.214459 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.214465 | orchestrator | 2026-03-03 01:16:23.214471 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-03 01:16:23.214477 | orchestrator | Tuesday 03 March 2026 01:11:06 +0000 (0:00:01.190) 0:03:18.969 ********* 2026-03-03 01:16:23.214484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-03 01:16:23.214491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.214504 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.214531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2026-03-03 01:16:23.214539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.214546 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.214579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-03 01:16:23.214588 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.214602 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.214610 | orchestrator | 2026-03-03 01:16:23.214618 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-03 01:16:23.214626 | orchestrator | Tuesday 03 March 2026 01:11:08 +0000 (0:00:01.690) 0:03:20.660 ********* 2026-03-03 01:16:23.214658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.214667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.214676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.214690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214729 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214737 | orchestrator | 2026-03-03 01:16:23.214744 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-03 01:16:23.214752 | orchestrator | Tuesday 03 March 2026 01:11:11 +0000 (0:00:03.842) 0:03:24.503 ********* 2026-03-03 01:16:23.214761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.214769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.214802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.214810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214827 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.214834 | orchestrator | 2026-03-03 01:16:23.214841 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-03 01:16:23.214846 | orchestrator | Tuesday 03 March 2026 01:11:22 +0000 (0:00:10.093) 0:03:34.596 ********* 2026-03-03 01:16:23.214853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-03 01:16:23.214889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.214900 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.214907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-03 01:16:23.214914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.214920 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.214926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2026-03-03 01:16:23.214942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.214948 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.214955 | orchestrator | 2026-03-03 01:16:23.214961 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-03 01:16:23.214967 | orchestrator | Tuesday 03 March 2026 01:11:23 +0000 (0:00:01.026) 0:03:35.622 ********* 2026-03-03 01:16:23.214973 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.214979 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:16:23.214984 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:16:23.214991 | orchestrator | 2026-03-03 01:16:23.215018 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-03 01:16:23.215026 | orchestrator | Tuesday 03 March 2026 01:11:24 +0000 (0:00:01.574) 0:03:37.197 ********* 2026-03-03 01:16:23.215033 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.215040 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.215047 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.215053 | orchestrator | 2026-03-03 01:16:23.215060 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-03 01:16:23.215066 | orchestrator | Tuesday 03 March 2026 01:11:25 +0000 (0:00:00.362) 0:03:37.559 ********* 2026-03-03 01:16:23.215074 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.215081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.215112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-03 01:16:23.215120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215146 | orchestrator | 2026-03-03 01:16:23.215153 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-03 01:16:23.215160 | orchestrator | Tuesday 03 March 2026 01:11:27 +0000 (0:00:02.714) 0:03:40.274 ********* 2026-03-03 01:16:23.215166 | orchestrator | 2026-03-03 01:16:23.215173 
| orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-03 01:16:23.215179 | orchestrator | Tuesday 03 March 2026 01:11:28 +0000 (0:00:00.389) 0:03:40.664 ********* 2026-03-03 01:16:23.215185 | orchestrator | 2026-03-03 01:16:23.215192 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-03 01:16:23.215198 | orchestrator | Tuesday 03 March 2026 01:11:28 +0000 (0:00:00.303) 0:03:40.967 ********* 2026-03-03 01:16:23.215205 | orchestrator | 2026-03-03 01:16:23.215211 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-03 01:16:23.215218 | orchestrator | Tuesday 03 March 2026 01:11:28 +0000 (0:00:00.255) 0:03:41.223 ********* 2026-03-03 01:16:23.215225 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.215231 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:16:23.215238 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:16:23.215245 | orchestrator | 2026-03-03 01:16:23.215251 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-03 01:16:23.215257 | orchestrator | Tuesday 03 March 2026 01:11:48 +0000 (0:00:19.900) 0:04:01.123 ********* 2026-03-03 01:16:23.215263 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:16:23.215270 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.215276 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:16:23.215283 | orchestrator | 2026-03-03 01:16:23.215290 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-03 01:16:23.215296 | orchestrator | 2026-03-03 01:16:23.215303 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-03 01:16:23.215309 | orchestrator | Tuesday 03 March 2026 01:12:00 +0000 (0:00:11.425) 0:04:12.548 ********* 2026-03-03 01:16:23.215317 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:16:23.215324 | orchestrator | 2026-03-03 01:16:23.215331 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-03 01:16:23.215338 | orchestrator | Tuesday 03 March 2026 01:12:02 +0000 (0:00:02.850) 0:04:15.399 ********* 2026-03-03 01:16:23.215344 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.215350 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.215357 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.215364 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.215370 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.215376 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.215383 | orchestrator | 2026-03-03 01:16:23.215390 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-03 01:16:23.215396 | orchestrator | Tuesday 03 March 2026 01:12:03 +0000 (0:00:00.746) 0:04:16.146 ********* 2026-03-03 01:16:23.215403 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.215409 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.215416 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.215423 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:16:23.215429 | orchestrator | 2026-03-03 01:16:23.215436 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-03 01:16:23.215487 | orchestrator | Tuesday 03 March 2026 01:12:05 +0000 (0:00:01.886) 0:04:18.032 ********* 2026-03-03 01:16:23.215496 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-03 01:16:23.215503 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-03 01:16:23.215516 | orchestrator | ok: 
[testbed-node-5] => (item=br_netfilter) 2026-03-03 01:16:23.215522 | orchestrator | 2026-03-03 01:16:23.215528 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-03 01:16:23.215534 | orchestrator | Tuesday 03 March 2026 01:12:06 +0000 (0:00:00.691) 0:04:18.724 ********* 2026-03-03 01:16:23.215540 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-03 01:16:23.215546 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-03 01:16:23.215553 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-03 01:16:23.215559 | orchestrator | 2026-03-03 01:16:23.215566 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-03 01:16:23.215573 | orchestrator | Tuesday 03 March 2026 01:12:07 +0000 (0:00:01.078) 0:04:19.802 ********* 2026-03-03 01:16:23.215580 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-03 01:16:23.215587 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.215594 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-03 01:16:23.215600 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.215607 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-03 01:16:23.215614 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.215620 | orchestrator | 2026-03-03 01:16:23.215626 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-03 01:16:23.215633 | orchestrator | Tuesday 03 March 2026 01:12:07 +0000 (0:00:00.632) 0:04:20.435 ********* 2026-03-03 01:16:23.215640 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-03 01:16:23.215646 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-03 01:16:23.215653 | orchestrator | skipping: [testbed-node-0] 2026-03-03 
01:16:23.215660 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-03 01:16:23.215666 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-03 01:16:23.215673 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-03 01:16:23.215680 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.215687 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-03 01:16:23.215693 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-03 01:16:23.215700 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-03 01:16:23.215706 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.215713 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-03 01:16:23.215720 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-03 01:16:23.215726 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-03 01:16:23.215733 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-03 01:16:23.215739 | orchestrator | 2026-03-03 01:16:23.215746 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-03 01:16:23.215753 | orchestrator | Tuesday 03 March 2026 01:12:09 +0000 (0:00:01.986) 0:04:22.422 ********* 2026-03-03 01:16:23.215760 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.215766 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.215773 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.215779 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.215786 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.215793 | orchestrator | changed: 
[testbed-node-5] 2026-03-03 01:16:23.215799 | orchestrator | 2026-03-03 01:16:23.215806 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-03 01:16:23.215812 | orchestrator | Tuesday 03 March 2026 01:12:10 +0000 (0:00:01.080) 0:04:23.502 ********* 2026-03-03 01:16:23.215819 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.215831 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.215838 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.215844 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.215851 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.215858 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:16:23.215864 | orchestrator | 2026-03-03 01:16:23.215871 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-03 01:16:23.215877 | orchestrator | Tuesday 03 March 2026 01:12:13 +0000 (0:00:02.353) 0:04:25.855 ********* 2026-03-03 01:16:23.215886 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215919 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.215994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216015 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216076 | orchestrator | 2026-03-03 01:16:23.216083 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-03 01:16:23.216090 | orchestrator | Tuesday 03 
March 2026 01:12:16 +0000 (0:00:03.588) 0:04:29.444 ********* 2026-03-03 01:16:23.216097 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:16:23.216105 | orchestrator | 2026-03-03 01:16:23.216111 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-03 01:16:23.216118 | orchestrator | Tuesday 03 March 2026 01:12:18 +0000 (0:00:01.287) 0:04:30.731 ********* 2026-03-03 01:16:23.216125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216180 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216280 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216287 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.216294 | orchestrator | 2026-03-03 01:16:23.216300 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-03 01:16:23.216307 | orchestrator | Tuesday 03 March 2026 01:12:21 +0000 (0:00:03.211) 0:04:33.943 ********* 2026-03-03 01:16:23.216331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.216339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.216347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-03-03 01:16:23.216360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.216367 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.216374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.216398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216406 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.216427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.216434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2026-03-03 01:16:23.216590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216603 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.216611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-03 01:16:23.216618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216625 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.216659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-03 01:16:23.216673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216680 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.216687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-03 01:16:23.216700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216707 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.216714 | orchestrator | 2026-03-03 01:16:23.216720 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-03 01:16:23.216727 | orchestrator | Tuesday 03 March 2026 01:12:23 +0000 (0:00:01.747) 0:04:35.690 ********* 2026-03-03 01:16:23.216734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.216740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.216774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216782 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.216789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.216801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.216808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216815 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.216821 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.216828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.216857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216870 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.216877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-03 01:16:23.216884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216890 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.216896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-03 01:16:23.216902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216908 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.216915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-03 01:16:23.216944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.216957 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.216964 | orchestrator | 2026-03-03 01:16:23.216971 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-03 01:16:23.216977 | orchestrator | Tuesday 03 March 2026 01:12:26 +0000 (0:00:03.026) 0:04:38.716 ********* 2026-03-03 01:16:23.216984 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.216991 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.216998 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.217005 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:16:23.217012 | orchestrator | 2026-03-03 01:16:23.217018 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-03 01:16:23.217025 | orchestrator | Tuesday 03 March 2026 01:12:27 +0000 (0:00:01.009) 0:04:39.726 ********* 2026-03-03 01:16:23.217031 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-03 01:16:23.217038 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-03 01:16:23.217045 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-03 01:16:23.217051 | orchestrator | 2026-03-03 01:16:23.217058 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-03 01:16:23.217064 | orchestrator | Tuesday 03 March 2026 01:12:28 +0000 (0:00:00.913) 0:04:40.640 ********* 2026-03-03 01:16:23.217070 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-03 01:16:23.217077 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-03 01:16:23.217083 | orchestrator | ok: [testbed-node-5 -> 
localhost] 2026-03-03 01:16:23.217090 | orchestrator | 2026-03-03 01:16:23.217096 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-03 01:16:23.217103 | orchestrator | Tuesday 03 March 2026 01:12:29 +0000 (0:00:00.942) 0:04:41.582 ********* 2026-03-03 01:16:23.217109 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:16:23.217116 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:16:23.217122 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:16:23.217128 | orchestrator | 2026-03-03 01:16:23.217135 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-03 01:16:23.217141 | orchestrator | Tuesday 03 March 2026 01:12:29 +0000 (0:00:00.661) 0:04:42.243 ********* 2026-03-03 01:16:23.217148 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:16:23.217154 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:16:23.217161 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:16:23.217167 | orchestrator | 2026-03-03 01:16:23.217174 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-03 01:16:23.217181 | orchestrator | Tuesday 03 March 2026 01:12:30 +0000 (0:00:00.637) 0:04:42.881 ********* 2026-03-03 01:16:23.217187 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-03 01:16:23.217194 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-03 01:16:23.217200 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-03 01:16:23.217207 | orchestrator | 2026-03-03 01:16:23.217213 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-03 01:16:23.217220 | orchestrator | Tuesday 03 March 2026 01:12:31 +0000 (0:00:01.098) 0:04:43.979 ********* 2026-03-03 01:16:23.217227 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-03 01:16:23.217234 | orchestrator | changed: [testbed-node-4] => 
(item=nova-compute) 2026-03-03 01:16:23.217240 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-03 01:16:23.217247 | orchestrator | 2026-03-03 01:16:23.217253 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-03 01:16:23.217260 | orchestrator | Tuesday 03 March 2026 01:12:32 +0000 (0:00:01.003) 0:04:44.983 ********* 2026-03-03 01:16:23.217267 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-03 01:16:23.217273 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-03 01:16:23.217280 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-03 01:16:23.217287 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-03 01:16:23.217298 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-03 01:16:23.217304 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-03 01:16:23.217311 | orchestrator | 2026-03-03 01:16:23.217317 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-03 01:16:23.217324 | orchestrator | Tuesday 03 March 2026 01:12:36 +0000 (0:00:03.857) 0:04:48.840 ********* 2026-03-03 01:16:23.217330 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.217337 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.217344 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.217350 | orchestrator | 2026-03-03 01:16:23.217356 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-03 01:16:23.217363 | orchestrator | Tuesday 03 March 2026 01:12:37 +0000 (0:00:00.722) 0:04:49.563 ********* 2026-03-03 01:16:23.217370 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.217376 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.217382 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.217389 | orchestrator | 
2026-03-03 01:16:23.217396 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-03 01:16:23.217403 | orchestrator | Tuesday 03 March 2026 01:12:37 +0000 (0:00:00.397) 0:04:49.960 ********* 2026-03-03 01:16:23.217410 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.217417 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.217423 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:16:23.217430 | orchestrator | 2026-03-03 01:16:23.217437 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-03 01:16:23.217461 | orchestrator | Tuesday 03 March 2026 01:12:39 +0000 (0:00:01.763) 0:04:51.724 ********* 2026-03-03 01:16:23.217492 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-03 01:16:23.217502 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-03 01:16:23.217512 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-03 01:16:23.217519 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-03 01:16:23.217525 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-03 01:16:23.217531 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-03 01:16:23.217536 | orchestrator | 2026-03-03 01:16:23.217543 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-03 01:16:23.217548 
| orchestrator | Tuesday 03 March 2026 01:12:42 +0000 (0:00:03.052) 0:04:54.777 ********* 2026-03-03 01:16:23.217555 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-03 01:16:23.217561 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-03 01:16:23.217568 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-03 01:16:23.217575 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-03 01:16:23.217581 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.217588 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-03 01:16:23.217595 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.217601 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-03 01:16:23.217608 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:16:23.217615 | orchestrator | 2026-03-03 01:16:23.217622 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-03 01:16:23.217629 | orchestrator | Tuesday 03 March 2026 01:12:45 +0000 (0:00:03.403) 0:04:58.181 ********* 2026-03-03 01:16:23.217635 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.217647 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.217654 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.217660 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-03 01:16:23.217667 | orchestrator | 2026-03-03 01:16:23.217673 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-03 01:16:23.217680 | orchestrator | Tuesday 03 March 2026 01:12:48 +0000 (0:00:02.347) 0:05:00.529 ********* 2026-03-03 01:16:23.217687 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-03 01:16:23.217693 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-03 01:16:23.217700 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-03 
01:16:23.217706 | orchestrator | 2026-03-03 01:16:23.217713 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-03 01:16:23.217720 | orchestrator | Tuesday 03 March 2026 01:12:49 +0000 (0:00:01.811) 0:05:02.340 ********* 2026-03-03 01:16:23.217727 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.217733 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.217740 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.217746 | orchestrator | 2026-03-03 01:16:23.217753 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-03 01:16:23.217759 | orchestrator | Tuesday 03 March 2026 01:12:50 +0000 (0:00:00.309) 0:05:02.650 ********* 2026-03-03 01:16:23.217766 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.217773 | orchestrator | 2026-03-03 01:16:23.217780 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-03 01:16:23.217787 | orchestrator | Tuesday 03 March 2026 01:12:50 +0000 (0:00:00.124) 0:05:02.775 ********* 2026-03-03 01:16:23.217794 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.217800 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.217806 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.217813 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.217820 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.217827 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.217833 | orchestrator | 2026-03-03 01:16:23.217840 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-03 01:16:23.217846 | orchestrator | Tuesday 03 March 2026 01:12:50 +0000 (0:00:00.502) 0:05:03.278 ********* 2026-03-03 01:16:23.217853 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-03 01:16:23.217860 | orchestrator | 2026-03-03 01:16:23.217866 | 
orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-03 01:16:23.217873 | orchestrator | Tuesday 03 March 2026 01:12:51 +0000 (0:00:00.801) 0:05:04.080 ********* 2026-03-03 01:16:23.217880 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.217887 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.217893 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.217900 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.217907 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.217913 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.217920 | orchestrator | 2026-03-03 01:16:23.217927 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-03 01:16:23.217933 | orchestrator | Tuesday 03 March 2026 01:12:52 +0000 (0:00:00.490) 0:05:04.570 ********* 2026-03-03 01:16:23.217952 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.217967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.217975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.217982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.217989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.217996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218070 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 
01:16:23.218112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2026-03-03 01:16:23.218151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218166 | orchestrator | 2026-03-03 01:16:23.218174 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-03 01:16:23.218181 | orchestrator | Tuesday 03 March 2026 01:12:55 +0000 (0:00:03.755) 0:05:08.325 ********* 2026-03-03 01:16:23.218188 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.218196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.218204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.218228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.218236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.218243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.218250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218257 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.218337 | orchestrator | 2026-03-03 01:16:23.218343 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-03 01:16:23.218351 | orchestrator | Tuesday 03 March 2026 01:13:01 +0000 (0:00:05.588) 0:05:13.914 ********* 2026-03-03 01:16:23.218358 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.218366 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.218374 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.218381 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.218392 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.218398 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.218405 | orchestrator | 2026-03-03 01:16:23.218413 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-03 01:16:23.218422 | orchestrator | Tuesday 03 March 2026 01:13:04 +0000 (0:00:02.772) 0:05:16.687 ********* 2026-03-03 01:16:23.218429 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-03 01:16:23.218436 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-03 01:16:23.218484 | orchestrator | skipping: [testbed-node-1] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-03 01:16:23.218493 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-03 01:16:23.218501 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-03 01:16:23.218508 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-03 01:16:23.218514 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-03 01:16:23.218521 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.218527 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-03 01:16:23.218534 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.218540 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-03 01:16:23.218547 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.218554 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-03 01:16:23.218562 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-03 01:16:23.218569 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-03 01:16:23.218575 | orchestrator | 2026-03-03 01:16:23.218581 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-03 01:16:23.218589 | orchestrator | Tuesday 03 March 2026 01:13:08 +0000 (0:00:04.336) 0:05:21.023 ********* 2026-03-03 01:16:23.218596 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.218603 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.218610 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.218617 | orchestrator | skipping: 
[testbed-node-0] 2026-03-03 01:16:23.218624 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.218631 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.218638 | orchestrator | 2026-03-03 01:16:23.218646 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-03 01:16:23.218653 | orchestrator | Tuesday 03 March 2026 01:13:09 +0000 (0:00:00.514) 0:05:21.537 ********* 2026-03-03 01:16:23.218660 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-03 01:16:23.218668 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-03 01:16:23.218680 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-03 01:16:23.218687 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-03 01:16:23.218694 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-03 01:16:23.218702 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-03 01:16:23.218709 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-03 01:16:23.218715 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-03 01:16:23.218722 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-03 01:16:23.218730 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-03 
01:16:23.218737 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.218744 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-03 01:16:23.218751 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.218758 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-03 01:16:23.218766 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.218773 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-03 01:16:23.218780 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-03 01:16:23.218787 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-03 01:16:23.218795 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-03 01:16:23.218807 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-03 01:16:23.218819 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-03 01:16:23.218825 | orchestrator | 2026-03-03 01:16:23.218832 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-03 01:16:23.218838 | orchestrator | Tuesday 03 March 2026 01:13:14 +0000 (0:00:05.114) 0:05:26.652 ********* 2026-03-03 01:16:23.218845 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-03 01:16:23.218853 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-03 01:16:23.218859 | 
orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-03 01:16:23.218867 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-03 01:16:23.218873 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-03 01:16:23.218881 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-03 01:16:23.218888 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-03 01:16:23.218895 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-03 01:16:23.218903 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-03 01:16:23.218909 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-03 01:16:23.218929 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-03 01:16:23.218936 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-03 01:16:23.218943 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-03 01:16:23.218950 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.218956 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-03 01:16:23.218963 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-03 01:16:23.218969 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-03 01:16:23.218976 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.218983 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-03 01:16:23.218989 | 
orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-03 01:16:23.218996 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.219004 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-03 01:16:23.219011 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-03 01:16:23.219017 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-03 01:16:23.219024 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-03 01:16:23.219031 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-03 01:16:23.219039 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-03 01:16:23.219045 | orchestrator | 2026-03-03 01:16:23.219051 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-03 01:16:23.219058 | orchestrator | Tuesday 03 March 2026 01:13:21 +0000 (0:00:07.645) 0:05:34.297 ********* 2026-03-03 01:16:23.219065 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.219072 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.219078 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.219086 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.219093 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.219099 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.219106 | orchestrator | 2026-03-03 01:16:23.219113 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-03 01:16:23.219121 | orchestrator | Tuesday 03 March 2026 01:13:22 +0000 (0:00:00.823) 0:05:35.121 ********* 2026-03-03 01:16:23.219128 | orchestrator | skipping: [testbed-node-3] 
2026-03-03 01:16:23.219136 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.219142 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.219150 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.219157 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.219164 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.219171 | orchestrator | 2026-03-03 01:16:23.219178 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-03 01:16:23.219185 | orchestrator | Tuesday 03 March 2026 01:13:23 +0000 (0:00:00.836) 0:05:35.958 ********* 2026-03-03 01:16:23.219192 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.219199 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.219207 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.219214 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.219221 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:16:23.219228 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.219236 | orchestrator | 2026-03-03 01:16:23.219243 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-03 01:16:23.219249 | orchestrator | Tuesday 03 March 2026 01:13:26 +0000 (0:00:03.073) 0:05:39.031 ********* 2026-03-03 01:16:23.219269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-03 
01:16:23.219304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.219313 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.219321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.219328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.219335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.219343 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.219360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  
2026-03-03 01:16:23.219373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.219380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.219387 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.219394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-03 01:16:23.219401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.219408 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.219415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-03 01:16:23.219434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-03 01:16:23.219468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.219476 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.219483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-03 01:16:23.219489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-03 01:16:23.219496 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.219503 | orchestrator | 2026-03-03 01:16:23.219510 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-03 01:16:23.219517 | orchestrator | Tuesday 03 March 2026 01:13:28 +0000 (0:00:02.329) 0:05:41.361 ********* 2026-03-03 01:16:23.219523 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-03 01:16:23.219530 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-03 01:16:23.219537 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.219543 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-03 01:16:23.219550 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-03 01:16:23.219557 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.219564 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-03 01:16:23.219572 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-03 01:16:23.219578 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.219585 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-03 01:16:23.219592 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-03 01:16:23.219610 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.219617 | orchestrator | skipping: [testbed-node-1] => 
(item=nova-compute)  2026-03-03 01:16:23.219624 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-03 01:16:23.219631 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.219638 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-03 01:16:23.219644 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-03 01:16:23.219651 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.219658 | orchestrator | 2026-03-03 01:16:23.219665 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-03 01:16:23.219672 | orchestrator | Tuesday 03 March 2026 01:13:30 +0000 (0:00:01.177) 0:05:42.538 ********* 2026-03-03 01:16:23.219689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 
01:16:23.219757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-03 01:16:23.219828 | orchestrator | 2026-03-03 01:16:23.219834 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-03 01:16:23.219841 | orchestrator | Tuesday 03 March 2026 01:13:32 +0000 (0:00:02.868) 0:05:45.406 ********* 2026-03-03 01:16:23.219847 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.219854 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.219860 | orchestrator | skipping: [testbed-node-5] 
2026-03-03 01:16:23.219867 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.219873 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.219879 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.219886 | orchestrator | 2026-03-03 01:16:23.219892 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-03 01:16:23.219898 | orchestrator | Tuesday 03 March 2026 01:13:33 +0000 (0:00:00.653) 0:05:46.059 ********* 2026-03-03 01:16:23.219908 | orchestrator | 2026-03-03 01:16:23.219915 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-03 01:16:23.219921 | orchestrator | Tuesday 03 March 2026 01:13:33 +0000 (0:00:00.159) 0:05:46.218 ********* 2026-03-03 01:16:23.219927 | orchestrator | 2026-03-03 01:16:23.219934 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-03 01:16:23.219941 | orchestrator | Tuesday 03 March 2026 01:13:33 +0000 (0:00:00.121) 0:05:46.339 ********* 2026-03-03 01:16:23.219947 | orchestrator | 2026-03-03 01:16:23.219954 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-03 01:16:23.219961 | orchestrator | Tuesday 03 March 2026 01:13:33 +0000 (0:00:00.122) 0:05:46.462 ********* 2026-03-03 01:16:23.219967 | orchestrator | 2026-03-03 01:16:23.219974 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-03 01:16:23.219980 | orchestrator | Tuesday 03 March 2026 01:13:34 +0000 (0:00:00.256) 0:05:46.719 ********* 2026-03-03 01:16:23.219986 | orchestrator | 2026-03-03 01:16:23.219993 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-03 01:16:23.220000 | orchestrator | Tuesday 03 March 2026 01:13:34 +0000 (0:00:00.123) 0:05:46.843 ********* 2026-03-03 01:16:23.220006 | orchestrator | 2026-03-03 01:16:23.220013 
| orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-03 01:16:23.220019 | orchestrator | Tuesday 03 March 2026 01:13:34 +0000 (0:00:00.119) 0:05:46.962 ********* 2026-03-03 01:16:23.220026 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.220032 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:16:23.220039 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:16:23.220045 | orchestrator | 2026-03-03 01:16:23.220051 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-03 01:16:23.220058 | orchestrator | Tuesday 03 March 2026 01:13:41 +0000 (0:00:06.660) 0:05:53.622 ********* 2026-03-03 01:16:23.220065 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.220072 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:16:23.220078 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:16:23.220084 | orchestrator | 2026-03-03 01:16:23.220091 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-03 01:16:23.220097 | orchestrator | Tuesday 03 March 2026 01:13:53 +0000 (0:00:12.004) 0:06:05.627 ********* 2026-03-03 01:16:23.220104 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.220110 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:16:23.220117 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.220123 | orchestrator | 2026-03-03 01:16:23.220130 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-03 01:16:23.220136 | orchestrator | Tuesday 03 March 2026 01:14:14 +0000 (0:00:21.771) 0:06:27.399 ********* 2026-03-03 01:16:23.220142 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:16:23.220149 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.220155 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.220162 | orchestrator | 2026-03-03 01:16:23.220168 | orchestrator 
| RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-03 01:16:23.220175 | orchestrator | Tuesday 03 March 2026 01:14:39 +0000 (0:00:24.613) 0:06:52.012 ********* 2026-03-03 01:16:23.220186 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-03-03 01:16:23.220194 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-03 01:16:23.220204 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-03-03 01:16:23.220210 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.220217 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:16:23.220223 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.220229 | orchestrator | 2026-03-03 01:16:23.220236 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-03 01:16:23.220248 | orchestrator | Tuesday 03 March 2026 01:14:46 +0000 (0:00:07.169) 0:06:59.182 ********* 2026-03-03 01:16:23.220255 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.220261 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.220267 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:16:23.220274 | orchestrator | 2026-03-03 01:16:23.220281 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-03 01:16:23.220288 | orchestrator | Tuesday 03 March 2026 01:14:47 +0000 (0:00:00.711) 0:06:59.894 ********* 2026-03-03 01:16:23.220295 | orchestrator | changed: [testbed-node-3] 2026-03-03 01:16:23.220301 | orchestrator | changed: [testbed-node-5] 2026-03-03 01:16:23.220308 | orchestrator | changed: [testbed-node-4] 2026-03-03 01:16:23.220314 | orchestrator | 2026-03-03 01:16:23.220321 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 
2026-03-03 01:16:23.220328 | orchestrator | Tuesday 03 March 2026 01:15:11 +0000 (0:00:23.707) 0:07:23.601 ********* 2026-03-03 01:16:23.220334 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.220341 | orchestrator | 2026-03-03 01:16:23.220348 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-03 01:16:23.220354 | orchestrator | Tuesday 03 March 2026 01:15:11 +0000 (0:00:00.120) 0:07:23.722 ********* 2026-03-03 01:16:23.220361 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.220368 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.220375 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.220381 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.220388 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.220394 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-03 01:16:23.220401 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:16:23.220408 | orchestrator | 2026-03-03 01:16:23.220415 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-03 01:16:23.220422 | orchestrator | Tuesday 03 March 2026 01:15:33 +0000 (0:00:22.195) 0:07:45.917 ********* 2026-03-03 01:16:23.220428 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.220503 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.220510 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.220517 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.220523 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.220530 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.220536 | orchestrator | 2026-03-03 01:16:23.220543 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-03 01:16:23.220550 | orchestrator | Tuesday 03 March 2026 01:15:41 +0000 (0:00:07.941) 0:07:53.859 ********* 2026-03-03 01:16:23.220556 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.220563 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.220569 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.220576 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.220582 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.220589 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-03-03 01:16:23.220596 | orchestrator | 2026-03-03 01:16:23.220603 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-03 01:16:23.220609 | orchestrator | Tuesday 03 March 2026 01:15:44 +0000 (0:00:03.504) 0:07:57.363 ********* 2026-03-03 01:16:23.220616 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:16:23.220623 | 
orchestrator | 2026-03-03 01:16:23.220629 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-03 01:16:23.220635 | orchestrator | Tuesday 03 March 2026 01:15:57 +0000 (0:00:12.498) 0:08:09.861 ********* 2026-03-03 01:16:23.220642 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:16:23.220649 | orchestrator | 2026-03-03 01:16:23.220655 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-03 01:16:23.220668 | orchestrator | Tuesday 03 March 2026 01:15:58 +0000 (0:00:01.398) 0:08:11.259 ********* 2026-03-03 01:16:23.220675 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.220681 | orchestrator | 2026-03-03 01:16:23.220688 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-03 01:16:23.220694 | orchestrator | Tuesday 03 March 2026 01:16:00 +0000 (0:00:01.428) 0:08:12.688 ********* 2026-03-03 01:16:23.220701 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-03 01:16:23.220707 | orchestrator | 2026-03-03 01:16:23.220714 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-03 01:16:23.220720 | orchestrator | Tuesday 03 March 2026 01:16:13 +0000 (0:00:13.441) 0:08:26.129 ********* 2026-03-03 01:16:23.220727 | orchestrator | ok: [testbed-node-3] 2026-03-03 01:16:23.220734 | orchestrator | ok: [testbed-node-4] 2026-03-03 01:16:23.220740 | orchestrator | ok: [testbed-node-5] 2026-03-03 01:16:23.220747 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:16:23.220754 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:16:23.220760 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:16:23.220766 | orchestrator | 2026-03-03 01:16:23.220773 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-03 01:16:23.220780 | orchestrator | 2026-03-03 
01:16:23.220787 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-03 01:16:23.220794 | orchestrator | Tuesday 03 March 2026 01:16:15 +0000 (0:00:01.908) 0:08:28.038 ********* 2026-03-03 01:16:23.220801 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:16:23.220812 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:16:23.220818 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:16:23.220824 | orchestrator | 2026-03-03 01:16:23.220831 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-03 01:16:23.220838 | orchestrator | 2026-03-03 01:16:23.220849 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-03 01:16:23.220856 | orchestrator | Tuesday 03 March 2026 01:16:16 +0000 (0:00:01.212) 0:08:29.250 ********* 2026-03-03 01:16:23.220862 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.220869 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.220876 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.220882 | orchestrator | 2026-03-03 01:16:23.220889 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-03 01:16:23.220895 | orchestrator | 2026-03-03 01:16:23.220902 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-03 01:16:23.220908 | orchestrator | Tuesday 03 March 2026 01:16:17 +0000 (0:00:00.509) 0:08:29.760 ********* 2026-03-03 01:16:23.220915 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-03 01:16:23.220922 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-03 01:16:23.220929 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-03 01:16:23.220937 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-03 01:16:23.220944 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-03 01:16:23.220951 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-03 01:16:23.220958 | orchestrator | skipping: [testbed-node-3] 2026-03-03 01:16:23.220966 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-03 01:16:23.220972 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-03 01:16:23.220979 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-03 01:16:23.220985 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-03 01:16:23.220992 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-03 01:16:23.221000 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-03 01:16:23.221006 | orchestrator | skipping: [testbed-node-4] 2026-03-03 01:16:23.221013 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-03 01:16:23.221025 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-03 01:16:23.221031 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-03 01:16:23.221038 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-03 01:16:23.221044 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-03 01:16:23.221051 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-03 01:16:23.221058 | orchestrator | skipping: [testbed-node-5] 2026-03-03 01:16:23.221065 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-03 01:16:23.221071 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-03 01:16:23.221078 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-03 01:16:23.221084 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-03 01:16:23.221091 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-03 01:16:23.221098 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-03 01:16:23.221104 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-03 01:16:23.221111 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-03 01:16:23.221118 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-03 01:16:23.221124 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-03 01:16:23.221131 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-03 01:16:23.221138 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-03 01:16:23.221145 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.221152 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.221158 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-03 01:16:23.221165 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-03 01:16:23.221172 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-03 01:16:23.221178 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-03 01:16:23.221184 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-03 01:16:23.221191 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-03 01:16:23.221198 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.221205 | orchestrator | 2026-03-03 01:16:23.221212 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-03 01:16:23.221218 | orchestrator | 2026-03-03 01:16:23.221225 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-03 01:16:23.221233 | orchestrator | Tuesday 03 March 2026 01:16:18 +0000 (0:00:01.270) 
0:08:31.030 ********* 2026-03-03 01:16:23.221240 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-03 01:16:23.221247 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-03 01:16:23.221253 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.221260 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-03 01:16:23.221267 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-03 01:16:23.221274 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.221281 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-03 01:16:23.221288 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-03 01:16:23.221295 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:16:23.221302 | orchestrator | 2026-03-03 01:16:23.221309 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-03 01:16:23.221316 | orchestrator | 2026-03-03 01:16:23.221327 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-03 01:16:23.221335 | orchestrator | Tuesday 03 March 2026 01:16:19 +0000 (0:00:00.779) 0:08:31.810 ********* 2026-03-03 01:16:23.221342 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.221348 | orchestrator | 2026-03-03 01:16:23.221367 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-03 01:16:23.221374 | orchestrator | 2026-03-03 01:16:23.221382 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-03 01:16:23.221389 | orchestrator | Tuesday 03 March 2026 01:16:19 +0000 (0:00:00.688) 0:08:32.498 ********* 2026-03-03 01:16:23.221395 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:16:23.221403 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:16:23.221409 | orchestrator | skipping: [testbed-node-2] 
2026-03-03 01:16:23.221415 | orchestrator | 2026-03-03 01:16:23.221422 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:16:23.221430 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 01:16:23.221440 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2026-03-03 01:16:23.221464 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-03 01:16:23.221471 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-03 01:16:23.221478 | orchestrator | testbed-node-3 : ok=45  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-03 01:16:23.221484 | orchestrator | testbed-node-4 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-03 01:16:23.221490 | orchestrator | testbed-node-5 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-03 01:16:23.221496 | orchestrator | 2026-03-03 01:16:23.221502 | orchestrator | 2026-03-03 01:16:23.221509 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:16:23.221515 | orchestrator | Tuesday 03 March 2026 01:16:20 +0000 (0:00:00.590) 0:08:33.088 ********* 2026-03-03 01:16:23.221521 | orchestrator | =============================================================================== 2026-03-03 01:16:23.221527 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 28.56s 2026-03-03 01:16:23.221533 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.61s 2026-03-03 01:16:23.221539 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.71s 2026-03-03 01:16:23.221545 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 22.20s 2026-03-03 01:16:23.221551 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.77s 2026-03-03 01:16:23.221557 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.72s 2026-03-03 01:16:23.221563 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.90s 2026-03-03 01:16:23.221570 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.67s 2026-03-03 01:16:23.221576 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.44s 2026-03-03 01:16:23.221582 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.22s 2026-03-03 01:16:23.221588 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.21s 2026-03-03 01:16:23.221594 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.69s 2026-03-03 01:16:23.221600 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.50s 2026-03-03 01:16:23.221606 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.35s 2026-03-03 01:16:23.221613 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.00s 2026-03-03 01:16:23.221620 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.43s 2026-03-03 01:16:23.221632 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.09s 2026-03-03 01:16:23.221638 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.44s 2026-03-03 01:16:23.221644 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.94s 2026-03-03 01:16:23.221650 | orchestrator | nova-cell : Copying files 
for nova-ssh ---------------------------------- 7.65s 2026-03-03 01:16:23.221657 | orchestrator | 2026-03-03 01:16:23 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state STARTED 2026-03-03 01:16:23.221664 | orchestrator | 2026-03-03 01:16:23 | INFO  | Wait 1 second(s) until the next check 2026-03-03 01:19:10.762841 | orchestrator | 2026-03-03 01:19:10 | INFO  | Task 5e1e8ee3-2ac7-40c9-b674-4036f4986f67 is in state SUCCESS 2026-03-03 01:19:10.764426 | orchestrator | 2026-03-03 01:19:10.764474 | orchestrator | 2026-03-03 01:19:10.764482 | orchestrator | PLAY [Group hosts
based on configuration] ************************************** 2026-03-03 01:19:10.764489 | orchestrator | 2026-03-03 01:19:10.764496 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-03 01:19:10.764503 | orchestrator | Tuesday 03 March 2026 01:14:25 +0000 (0:00:00.308) 0:00:00.308 ********* 2026-03-03 01:19:10.764510 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.764517 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:19:10.764524 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:19:10.764566 | orchestrator | 2026-03-03 01:19:10.764575 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-03 01:19:10.764640 | orchestrator | Tuesday 03 March 2026 01:14:25 +0000 (0:00:00.282) 0:00:00.590 ********* 2026-03-03 01:19:10.764727 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-03 01:19:10.764736 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-03 01:19:10.764742 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-03 01:19:10.764748 | orchestrator | 2026-03-03 01:19:10.764755 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-03 01:19:10.764761 | orchestrator | 2026-03-03 01:19:10.764768 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-03 01:19:10.764774 | orchestrator | Tuesday 03 March 2026 01:14:26 +0000 (0:00:00.387) 0:00:00.978 ********* 2026-03-03 01:19:10.764811 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:19:10.764820 | orchestrator | 2026-03-03 01:19:10.764826 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-03 01:19:10.764832 | orchestrator | Tuesday 03 March 2026 01:14:26 +0000 (0:00:00.557) 0:00:01.535 
********* 2026-03-03 01:19:10.764839 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-03 01:19:10.764845 | orchestrator | 2026-03-03 01:19:10.764852 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-03 01:19:10.764858 | orchestrator | Tuesday 03 March 2026 01:14:29 +0000 (0:00:03.123) 0:00:04.659 ********* 2026-03-03 01:19:10.764864 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-03 01:19:10.764870 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-03 01:19:10.764877 | orchestrator | 2026-03-03 01:19:10.764883 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-03 01:19:10.764889 | orchestrator | Tuesday 03 March 2026 01:14:35 +0000 (0:00:05.624) 0:00:10.284 ********* 2026-03-03 01:19:10.764895 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-03 01:19:10.764938 | orchestrator | 2026-03-03 01:19:10.764944 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-03 01:19:10.764951 | orchestrator | Tuesday 03 March 2026 01:14:38 +0000 (0:00:03.233) 0:00:13.518 ********* 2026-03-03 01:19:10.765127 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-03 01:19:10.765180 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-03 01:19:10.765380 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-03 01:19:10.765391 | orchestrator | 2026-03-03 01:19:10.765397 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-03 01:19:10.765404 | orchestrator | Tuesday 03 March 2026 01:14:46 +0000 (0:00:07.743) 0:00:21.261 ********* 2026-03-03 01:19:10.765426 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-03-03 01:19:10.765432 | orchestrator | 2026-03-03 01:19:10.765439 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-03 01:19:10.765445 | orchestrator | Tuesday 03 March 2026 01:14:49 +0000 (0:00:03.098) 0:00:24.360 ********* 2026-03-03 01:19:10.765451 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-03 01:19:10.765458 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-03 01:19:10.765464 | orchestrator | 2026-03-03 01:19:10.765470 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-03 01:19:10.765476 | orchestrator | Tuesday 03 March 2026 01:14:57 +0000 (0:00:07.598) 0:00:31.959 ********* 2026-03-03 01:19:10.765482 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-03 01:19:10.765489 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-03 01:19:10.765495 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-03 01:19:10.765501 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-03 01:19:10.765517 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-03 01:19:10.765523 | orchestrator | 2026-03-03 01:19:10.765529 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-03 01:19:10.765535 | orchestrator | Tuesday 03 March 2026 01:15:12 +0000 (0:00:15.164) 0:00:47.123 ********* 2026-03-03 01:19:10.765542 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:19:10.765548 | orchestrator | 2026-03-03 01:19:10.765554 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-03 01:19:10.765561 | orchestrator | Tuesday 03 March 2026 01:15:13 +0000 (0:00:01.021) 
0:00:48.144 ********* 2026-03-03 01:19:10.765567 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.765573 | orchestrator | 2026-03-03 01:19:10.765579 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-03 01:19:10.765585 | orchestrator | Tuesday 03 March 2026 01:15:18 +0000 (0:00:05.251) 0:00:53.396 ********* 2026-03-03 01:19:10.765592 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.765598 | orchestrator | 2026-03-03 01:19:10.765604 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-03 01:19:10.765639 | orchestrator | Tuesday 03 March 2026 01:15:22 +0000 (0:00:04.058) 0:00:57.454 ********* 2026-03-03 01:19:10.765647 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.765653 | orchestrator | 2026-03-03 01:19:10.765660 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-03 01:19:10.765666 | orchestrator | Tuesday 03 March 2026 01:15:25 +0000 (0:00:02.994) 0:01:00.448 ********* 2026-03-03 01:19:10.765672 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-03 01:19:10.765678 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-03 01:19:10.765685 | orchestrator | 2026-03-03 01:19:10.765691 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-03 01:19:10.765697 | orchestrator | Tuesday 03 March 2026 01:15:36 +0000 (0:00:10.853) 0:01:11.301 ********* 2026-03-03 01:19:10.765703 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-03 01:19:10.765710 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-03 01:19:10.765719 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 
'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-03 01:19:10.765733 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-03 01:19:10.765739 | orchestrator | 2026-03-03 01:19:10.765746 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-03 01:19:10.765752 | orchestrator | Tuesday 03 March 2026 01:15:52 +0000 (0:00:15.508) 0:01:26.810 ********* 2026-03-03 01:19:10.765758 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.765764 | orchestrator | 2026-03-03 01:19:10.765771 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-03 01:19:10.765777 | orchestrator | Tuesday 03 March 2026 01:15:56 +0000 (0:00:03.967) 0:01:30.777 ********* 2026-03-03 01:19:10.765784 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.765790 | orchestrator | 2026-03-03 01:19:10.765796 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-03 01:19:10.765802 | orchestrator | Tuesday 03 March 2026 01:16:00 +0000 (0:00:04.702) 0:01:35.480 ********* 2026-03-03 01:19:10.765808 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:19:10.765815 | orchestrator | 2026-03-03 01:19:10.765836 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-03 01:19:10.765842 | orchestrator | Tuesday 03 March 2026 01:16:01 +0000 (0:00:00.206) 0:01:35.686 ********* 2026-03-03 01:19:10.765848 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.765860 | orchestrator | 2026-03-03 01:19:10.765867 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-03 01:19:10.765873 | orchestrator | Tuesday 03 March 2026 01:16:04 +0000 (0:00:03.523) 0:01:39.210 ********* 
2026-03-03 01:19:10.765879 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:19:10.765886 | orchestrator | 2026-03-03 01:19:10.765892 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-03 01:19:10.765898 | orchestrator | Tuesday 03 March 2026 01:16:05 +0000 (0:00:00.948) 0:01:40.158 ********* 2026-03-03 01:19:10.765904 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.765911 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.765917 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.765923 | orchestrator | 2026-03-03 01:19:10.765929 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-03 01:19:10.765935 | orchestrator | Tuesday 03 March 2026 01:16:11 +0000 (0:00:05.855) 0:01:46.014 ********* 2026-03-03 01:19:10.765942 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.765948 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.765954 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.765960 | orchestrator | 2026-03-03 01:19:10.765966 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-03 01:19:10.765972 | orchestrator | Tuesday 03 March 2026 01:16:15 +0000 (0:00:04.538) 0:01:50.552 ********* 2026-03-03 01:19:10.765978 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.765985 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.765991 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.765997 | orchestrator | 2026-03-03 01:19:10.766005 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-03 01:19:10.766054 | orchestrator | Tuesday 03 March 2026 01:16:16 +0000 (0:00:00.837) 0:01:51.389 ********* 2026-03-03 01:19:10.766064 | orchestrator | ok: [testbed-node-1] 
2026-03-03 01:19:10.766071 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.766079 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:19:10.766086 | orchestrator | 2026-03-03 01:19:10.766094 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-03 01:19:10.766101 | orchestrator | Tuesday 03 March 2026 01:16:18 +0000 (0:00:02.172) 0:01:53.562 ********* 2026-03-03 01:19:10.766109 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.766116 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.766123 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.766130 | orchestrator | 2026-03-03 01:19:10.766137 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-03 01:19:10.766144 | orchestrator | Tuesday 03 March 2026 01:16:20 +0000 (0:00:01.339) 0:01:54.902 ********* 2026-03-03 01:19:10.766152 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.766159 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.766166 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.766173 | orchestrator | 2026-03-03 01:19:10.766180 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-03 01:19:10.766187 | orchestrator | Tuesday 03 March 2026 01:16:21 +0000 (0:00:01.210) 0:01:56.112 ********* 2026-03-03 01:19:10.766194 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.766201 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.766208 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.766216 | orchestrator | 2026-03-03 01:19:10.766245 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-03 01:19:10.766254 | orchestrator | Tuesday 03 March 2026 01:16:23 +0000 (0:00:02.087) 0:01:58.199 ********* 2026-03-03 01:19:10.766261 | orchestrator | changed: [testbed-node-1] 2026-03-03 
01:19:10.766268 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.766275 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.766282 | orchestrator | 2026-03-03 01:19:10.766289 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-03 01:19:10.766302 | orchestrator | Tuesday 03 March 2026 01:16:25 +0000 (0:00:01.991) 0:02:00.191 ********* 2026-03-03 01:19:10.766310 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.766317 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:19:10.766323 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:19:10.766329 | orchestrator | 2026-03-03 01:19:10.766335 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-03 01:19:10.766342 | orchestrator | Tuesday 03 March 2026 01:16:26 +0000 (0:00:00.654) 0:02:00.845 ********* 2026-03-03 01:19:10.766348 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:19:10.766354 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:19:10.766360 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.766366 | orchestrator | 2026-03-03 01:19:10.766372 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-03 01:19:10.766379 | orchestrator | Tuesday 03 March 2026 01:16:29 +0000 (0:00:03.820) 0:02:04.666 ********* 2026-03-03 01:19:10.766389 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:19:10.766396 | orchestrator | 2026-03-03 01:19:10.766402 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-03 01:19:10.766462 | orchestrator | Tuesday 03 March 2026 01:16:30 +0000 (0:00:00.669) 0:02:05.335 ********* 2026-03-03 01:19:10.766469 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.766475 | orchestrator | 2026-03-03 01:19:10.766481 | orchestrator | TASK [octavia : Get 
service project id] **************************************** 2026-03-03 01:19:10.766488 | orchestrator | Tuesday 03 March 2026 01:16:35 +0000 (0:00:04.609) 0:02:09.945 ********* 2026-03-03 01:19:10.766494 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.766500 | orchestrator | 2026-03-03 01:19:10.766506 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-03 01:19:10.766512 | orchestrator | Tuesday 03 March 2026 01:16:38 +0000 (0:00:03.416) 0:02:13.361 ********* 2026-03-03 01:19:10.766519 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-03 01:19:10.766525 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-03 01:19:10.766531 | orchestrator | 2026-03-03 01:19:10.766538 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-03 01:19:10.766544 | orchestrator | Tuesday 03 March 2026 01:16:46 +0000 (0:00:07.709) 0:02:21.071 ********* 2026-03-03 01:19:10.766550 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.766556 | orchestrator | 2026-03-03 01:19:10.766562 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-03 01:19:10.766568 | orchestrator | Tuesday 03 March 2026 01:16:50 +0000 (0:00:03.686) 0:02:24.758 ********* 2026-03-03 01:19:10.766575 | orchestrator | ok: [testbed-node-0] 2026-03-03 01:19:10.766581 | orchestrator | ok: [testbed-node-1] 2026-03-03 01:19:10.766587 | orchestrator | ok: [testbed-node-2] 2026-03-03 01:19:10.766593 | orchestrator | 2026-03-03 01:19:10.766600 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-03 01:19:10.766606 | orchestrator | Tuesday 03 March 2026 01:16:50 +0000 (0:00:00.343) 0:02:25.101 ********* 2026-03-03 01:19:10.766615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.766653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.766666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.766674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.766682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.766689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.766696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.766709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.766735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.766743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.766753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.766760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.766767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.766774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.766786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.766792 | orchestrator | 2026-03-03 01:19:10.766799 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-03 01:19:10.766805 | orchestrator | Tuesday 03 March 2026 01:16:52 +0000 (0:00:02.558) 0:02:27.660 ********* 2026-03-03 01:19:10.766812 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:19:10.766818 | orchestrator | 2026-03-03 01:19:10.766841 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-03 01:19:10.766848 | orchestrator | Tuesday 03 March 2026 01:16:53 +0000 (0:00:00.152) 0:02:27.812 ********* 2026-03-03 01:19:10.766855 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:19:10.766861 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:19:10.766867 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:19:10.766873 | orchestrator | 2026-03-03 01:19:10.766880 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-03 01:19:10.766886 | orchestrator | Tuesday 03 March 2026 01:16:53 +0000 (0:00:00.452) 0:02:28.264 ********* 2026-03-03 01:19:10.766896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:19:10.766903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:19:10.766910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.766923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.766930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:19:10.766936 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:19:10.766962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:19:10.766973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:19:10.766979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.766986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.766999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:19:10.767005 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:19:10.767012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:19:10.767037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:19:10.767044 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:19:10.767077 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:19:10.767084 | orchestrator | 2026-03-03 01:19:10.767090 | 
orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-03 01:19:10.767096 | orchestrator | Tuesday 03 March 2026 01:16:54 +0000 (0:00:00.706) 0:02:28.971 ********* 2026-03-03 01:19:10.767103 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-03 01:19:10.767109 | orchestrator | 2026-03-03 01:19:10.767115 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-03 01:19:10.767121 | orchestrator | Tuesday 03 March 2026 01:16:54 +0000 (0:00:00.530) 0:02:29.501 ********* 2026-03-03 01:19:10.767128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.767154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.767165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.767172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.767184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.767190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.767197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767279 | orchestrator | 2026-03-03 01:19:10.767285 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-03 01:19:10.767292 | orchestrator | Tuesday 03 March 2026 01:17:00 +0000 (0:00:05.570) 0:02:35.071 ********* 2026-03-03 01:19:10.767302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:19:10.767313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:19:10.767320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:19:10.767340 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:19:10.767351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:19:10.767358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:19:10.767368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:19:10.767392 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:19:10.767399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:19:10.767405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:19:10.767466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:19:10.767496 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:19:10.767505 | orchestrator | 2026-03-03 01:19:10.767515 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-03 01:19:10.767525 | orchestrator | Tuesday 03 March 2026 
01:17:01 +0000 (0:00:00.666) 0:02:35.737 ********* 2026-03-03 01:19:10.767534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:19:10.767544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:19:10.767556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:19:10.767602 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:19:10.767612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:19:10.767621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:19:10.767630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:19:10.767672 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:19:10.767691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-03 01:19:10.767701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-03 01:19:10.767713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-03 01:19:10.767734 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-03 01:19:10.767744 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:19:10.767751 | orchestrator | 2026-03-03 01:19:10.767757 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-03 01:19:10.767763 | orchestrator | Tuesday 03 March 2026 01:17:01 +0000 (0:00:00.864) 0:02:36.602 ********* 2026-03-03 01:19:10.767776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.767793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.767800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.767806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.767813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.767820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.767835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767908 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.767933 | orchestrator | 2026-03-03 01:19:10.767943 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-03 01:19:10.767954 | orchestrator | Tuesday 03 March 2026 01:17:07 +0000 (0:00:05.348) 0:02:41.951 ********* 2026-03-03 01:19:10.767964 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-03 01:19:10.767976 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-03 01:19:10.767986 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-03 01:19:10.767997 | orchestrator | 2026-03-03 01:19:10.768003 | orchestrator | TASK [octavia : Copying over octavia.conf] 
************************************* 2026-03-03 01:19:10.768010 | orchestrator | Tuesday 03 March 2026 01:17:09 +0000 (0:00:01.850) 0:02:43.801 ********* 2026-03-03 01:19:10.768016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.768023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.768041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.768051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.768058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.768065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.768071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 
01:19:10.768089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768116 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768148 | orchestrator | 2026-03-03 01:19:10.768154 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-03 01:19:10.768160 | orchestrator | Tuesday 03 March 2026 01:17:25 +0000 (0:00:16.270) 0:03:00.072 ********* 2026-03-03 01:19:10.768167 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.768173 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.768180 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.768186 | orchestrator | 2026-03-03 01:19:10.768192 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-03 01:19:10.768198 | orchestrator | Tuesday 03 March 2026 01:17:26 +0000 (0:00:01.597) 0:03:01.670 ********* 2026-03-03 01:19:10.768205 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-03 01:19:10.768211 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-03 01:19:10.768220 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-03 01:19:10.768227 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-03 01:19:10.768233 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-03 01:19:10.768239 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-03 01:19:10.768246 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-03 01:19:10.768252 | orchestrator | changed: [testbed-node-1] => 
(item=server_ca.cert.pem) 2026-03-03 01:19:10.768258 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-03 01:19:10.768264 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-03 01:19:10.768270 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-03 01:19:10.768276 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-03 01:19:10.768282 | orchestrator | 2026-03-03 01:19:10.768288 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-03 01:19:10.768295 | orchestrator | Tuesday 03 March 2026 01:17:32 +0000 (0:00:05.483) 0:03:07.153 ********* 2026-03-03 01:19:10.768301 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-03 01:19:10.768307 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-03 01:19:10.768317 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-03 01:19:10.768323 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-03 01:19:10.768330 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-03 01:19:10.768336 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-03 01:19:10.768342 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-03 01:19:10.768349 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-03 01:19:10.768355 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-03 01:19:10.768361 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-03 01:19:10.768368 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-03 01:19:10.768374 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-03 01:19:10.768380 | orchestrator | 2026-03-03 01:19:10.768386 | orchestrator | TASK 
[octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-03 01:19:10.768393 | orchestrator | Tuesday 03 March 2026 01:17:38 +0000 (0:00:05.537) 0:03:12.691 ********* 2026-03-03 01:19:10.768404 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-03 01:19:10.768451 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-03 01:19:10.768458 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-03 01:19:10.768465 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-03 01:19:10.768471 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-03 01:19:10.768477 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-03 01:19:10.768483 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-03 01:19:10.768490 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-03 01:19:10.768496 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-03 01:19:10.768502 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-03 01:19:10.768508 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-03 01:19:10.768514 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-03 01:19:10.768520 | orchestrator | 2026-03-03 01:19:10.768527 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-03 01:19:10.768533 | orchestrator | Tuesday 03 March 2026 01:17:43 +0000 (0:00:05.205) 0:03:17.896 ********* 2026-03-03 01:19:10.768539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.768552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.768563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-03 01:19:10.768577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.768584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.768591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-03 01:19:10.768597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-03 01:19:10.768669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
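Each service item looped over above carries a kolla-style `healthcheck` dict: `interval`, `retries`, `start_period`, and `timeout` as bare-second strings, plus a `test` list whose first element (`CMD-SHELL`) selects shell execution of helpers such as `healthcheck_curl` and `healthcheck_port` shipped inside the kolla images. A minimal sketch of how such a dict could be mapped onto `docker run` health flags; the helper function and the flag mapping are illustrative assumptions, not part of kolla-ansible:

```python
# Sketch (assumption): translate a kolla-ansible style healthcheck dict,
# as shown in the loop items above, into docker CLI health flags.
def healthcheck_flags(hc):
    # kolla stores durations as bare seconds in strings, docker wants units
    cmd = hc["test"]
    shell_cmd = " ".join(cmd[1:]) if cmd and cmd[0] == "CMD-SHELL" else " ".join(cmd)
    return [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
        "--health-cmd", shell_cmd,
    ]

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"],
      "timeout": "30"}
print(healthcheck_flags(hc))
```

The `test` command runs inside the container, which is why bare helper names like `healthcheck_curl` resolve there.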
2026-03-03 01:19:10.768675 | orchestrator | 2026-03-03 01:19:10.768682 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-03 01:19:10.768688 | orchestrator | Tuesday 03 March 2026 01:17:47 +0000 (0:00:04.019) 0:03:21.916 ********* 2026-03-03 01:19:10.768694 | orchestrator | skipping: [testbed-node-0] 2026-03-03 01:19:10.768701 | orchestrator | skipping: [testbed-node-1] 2026-03-03 01:19:10.768707 | orchestrator | skipping: [testbed-node-2] 2026-03-03 01:19:10.768713 | orchestrator | 2026-03-03 01:19:10.768719 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-03 01:19:10.768725 | orchestrator | Tuesday 03 March 2026 01:17:47 +0000 (0:00:00.295) 0:03:22.211 ********* 2026-03-03 01:19:10.768736 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.768742 | orchestrator | 2026-03-03 01:19:10.768748 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-03 01:19:10.768755 | orchestrator | Tuesday 03 March 2026 01:17:49 +0000 (0:00:02.385) 0:03:24.597 ********* 2026-03-03 01:19:10.768761 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.768767 | orchestrator | 2026-03-03 01:19:10.768785 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-03 01:19:10.768796 | orchestrator | Tuesday 03 March 2026 01:17:52 +0000 (0:00:02.340) 0:03:26.937 ********* 2026-03-03 01:19:10.768802 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.768809 | orchestrator | 2026-03-03 01:19:10.768823 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-03 01:19:10.768830 | orchestrator | Tuesday 03 March 2026 01:17:54 +0000 (0:00:02.543) 0:03:29.481 ********* 2026-03-03 01:19:10.768836 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.768842 | orchestrator | 2026-03-03 
01:19:10.768848 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-03 01:19:10.768854 | orchestrator | Tuesday 03 March 2026 01:17:57 +0000 (0:00:02.886) 0:03:32.367 ********* 2026-03-03 01:19:10.768861 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.768867 | orchestrator | 2026-03-03 01:19:10.768873 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-03 01:19:10.768879 | orchestrator | Tuesday 03 March 2026 01:18:20 +0000 (0:00:23.004) 0:03:55.372 ********* 2026-03-03 01:19:10.768885 | orchestrator | 2026-03-03 01:19:10.768892 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-03 01:19:10.768898 | orchestrator | Tuesday 03 March 2026 01:18:20 +0000 (0:00:00.073) 0:03:55.446 ********* 2026-03-03 01:19:10.768904 | orchestrator | 2026-03-03 01:19:10.768910 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-03 01:19:10.768916 | orchestrator | Tuesday 03 March 2026 01:18:20 +0000 (0:00:00.068) 0:03:55.515 ********* 2026-03-03 01:19:10.768923 | orchestrator | 2026-03-03 01:19:10.768929 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-03 01:19:10.768935 | orchestrator | Tuesday 03 March 2026 01:18:20 +0000 (0:00:00.073) 0:03:55.588 ********* 2026-03-03 01:19:10.768941 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.768947 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.768954 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.768960 | orchestrator | 2026-03-03 01:19:10.768966 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-03 01:19:10.768972 | orchestrator | Tuesday 03 March 2026 01:18:32 +0000 (0:00:11.161) 0:04:06.750 ********* 2026-03-03 01:19:10.768979 | orchestrator | changed: 
[testbed-node-2] 2026-03-03 01:19:10.768985 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.768991 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.768997 | orchestrator | 2026-03-03 01:19:10.769003 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-03 01:19:10.769010 | orchestrator | Tuesday 03 March 2026 01:18:42 +0000 (0:00:10.737) 0:04:17.488 ********* 2026-03-03 01:19:10.769016 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.769022 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.769028 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.769035 | orchestrator | 2026-03-03 01:19:10.769041 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-03 01:19:10.769047 | orchestrator | Tuesday 03 March 2026 01:18:53 +0000 (0:00:10.696) 0:04:28.184 ********* 2026-03-03 01:19:10.769053 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.769060 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.769066 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.769072 | orchestrator | 2026-03-03 01:19:10.769078 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-03 01:19:10.769084 | orchestrator | Tuesday 03 March 2026 01:19:02 +0000 (0:00:08.621) 0:04:36.806 ********* 2026-03-03 01:19:10.769095 | orchestrator | changed: [testbed-node-0] 2026-03-03 01:19:10.769101 | orchestrator | changed: [testbed-node-2] 2026-03-03 01:19:10.769108 | orchestrator | changed: [testbed-node-1] 2026-03-03 01:19:10.769114 | orchestrator | 2026-03-03 01:19:10.769120 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:19:10.769127 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-03 01:19:10.769134 | orchestrator | 
testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-03 01:19:10.769140 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-03 01:19:10.769146 | orchestrator | 2026-03-03 01:19:10.769152 | orchestrator | 2026-03-03 01:19:10.769159 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:19:10.769165 | orchestrator | Tuesday 03 March 2026 01:19:08 +0000 (0:00:06.006) 0:04:42.813 ********* 2026-03-03 01:19:10.769175 | orchestrator | =============================================================================== 2026-03-03 01:19:10.769181 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.01s 2026-03-03 01:19:10.769188 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.27s 2026-03-03 01:19:10.769194 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.51s 2026-03-03 01:19:10.769200 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.17s 2026-03-03 01:19:10.769206 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.16s 2026-03-03 01:19:10.769212 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.85s 2026-03-03 01:19:10.769219 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.74s 2026-03-03 01:19:10.769225 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.70s 2026-03-03 01:19:10.769231 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.62s 2026-03-03 01:19:10.769237 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.74s 2026-03-03 01:19:10.769243 | orchestrator | octavia : Get security groups for 
octavia ------------------------------- 7.71s 2026-03-03 01:19:10.769253 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.60s 2026-03-03 01:19:10.769259 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.01s 2026-03-03 01:19:10.769266 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.86s 2026-03-03 01:19:10.769272 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.62s 2026-03-03 01:19:10.769278 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.57s 2026-03-03 01:19:10.769284 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.54s 2026-03-03 01:19:10.769290 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.48s 2026-03-03 01:19:10.769297 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.35s 2026-03-03 01:19:10.769303 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.25s 2026-03-03 01:19:10.769309 | orchestrator | 2026-03-03 01:19:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-03 01:20:12.005874 | orchestrator | 2026-03-03 01:20:12.009036 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Mar 3 01:20:12 UTC 2026 2026-03-03 01:20:12.009082 | orchestrator | 2026-03-03 01:20:12.335262 | orchestrator | ok: Runtime: 0:31:57.424782 2026-03-03 01:20:12.585069 | 2026-03-03 01:20:12.585217 | TASK [Bootstrap services] 
2026-03-03 01:20:13.506458 | orchestrator | 2026-03-03 01:20:13.506671 | orchestrator | # BOOTSTRAP 2026-03-03 01:20:13.506691 | orchestrator | 2026-03-03 01:20:13.506703 | orchestrator | + set -e 2026-03-03 01:20:13.506715 | orchestrator | + echo 2026-03-03 01:20:13.506726 | orchestrator | + echo '# BOOTSTRAP' 2026-03-03 01:20:13.506740 | orchestrator | + echo 2026-03-03 01:20:13.506797 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-03 01:20:13.515651 | orchestrator | + set -e 2026-03-03 01:20:13.515743 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-03 01:20:18.282475 | orchestrator | 2026-03-03 01:20:18 | INFO  | It takes a moment until task b42f461e-aa9c-429d-9483-f461dc50806e (flavor-manager) has been started and output is visible here. 2026-03-03 01:20:27.046720 | orchestrator | 2026-03-03 01:20:21 | INFO  | Flavor SCS-1L-1 created 2026-03-03 01:20:27.046816 | orchestrator | 2026-03-03 01:20:22 | INFO  | Flavor SCS-1L-1-5 created 2026-03-03 01:20:27.046826 | orchestrator | 2026-03-03 01:20:22 | INFO  | Flavor SCS-1V-2 created 2026-03-03 01:20:27.046831 | orchestrator | 2026-03-03 01:20:22 | INFO  | Flavor SCS-1V-2-5 created 2026-03-03 01:20:27.046836 | orchestrator | 2026-03-03 01:20:22 | INFO  | Flavor SCS-1V-4 created 2026-03-03 01:20:27.046841 | orchestrator | 2026-03-03 01:20:23 | INFO  | Flavor SCS-1V-4-10 created 2026-03-03 01:20:27.046846 | orchestrator | 2026-03-03 01:20:23 | INFO  | Flavor SCS-1V-8 created 2026-03-03 01:20:27.046852 | orchestrator | 2026-03-03 01:20:23 | INFO  | Flavor SCS-1V-8-20 created 2026-03-03 01:20:27.046864 | orchestrator | 2026-03-03 01:20:23 | INFO  | Flavor SCS-2V-4 created 2026-03-03 01:20:27.046869 | orchestrator | 2026-03-03 01:20:23 | INFO  | Flavor SCS-2V-4-10 created 2026-03-03 01:20:27.046874 | orchestrator | 2026-03-03 01:20:24 | INFO  | Flavor SCS-2V-8 created 2026-03-03 01:20:27.046879 | orchestrator | 2026-03-03 01:20:24 | INFO  | Flavor 
SCS-2V-8-20 created 2026-03-03 01:20:27.046886 | orchestrator | 2026-03-03 01:20:24 | INFO  | Flavor SCS-2V-16 created 2026-03-03 01:20:27.046893 | orchestrator | 2026-03-03 01:20:24 | INFO  | Flavor SCS-2V-16-50 created 2026-03-03 01:20:27.046900 | orchestrator | 2026-03-03 01:20:24 | INFO  | Flavor SCS-4V-8 created 2026-03-03 01:20:27.046907 | orchestrator | 2026-03-03 01:20:24 | INFO  | Flavor SCS-4V-8-20 created 2026-03-03 01:20:27.046914 | orchestrator | 2026-03-03 01:20:24 | INFO  | Flavor SCS-4V-16 created 2026-03-03 01:20:27.046921 | orchestrator | 2026-03-03 01:20:25 | INFO  | Flavor SCS-4V-16-50 created 2026-03-03 01:20:27.046928 | orchestrator | 2026-03-03 01:20:25 | INFO  | Flavor SCS-4V-32 created 2026-03-03 01:20:27.046935 | orchestrator | 2026-03-03 01:20:25 | INFO  | Flavor SCS-4V-32-100 created 2026-03-03 01:20:27.046942 | orchestrator | 2026-03-03 01:20:25 | INFO  | Flavor SCS-8V-16 created 2026-03-03 01:20:27.046948 | orchestrator | 2026-03-03 01:20:25 | INFO  | Flavor SCS-8V-16-50 created 2026-03-03 01:20:27.046954 | orchestrator | 2026-03-03 01:20:25 | INFO  | Flavor SCS-8V-32 created 2026-03-03 01:20:27.046961 | orchestrator | 2026-03-03 01:20:25 | INFO  | Flavor SCS-8V-32-100 created 2026-03-03 01:20:27.046968 | orchestrator | 2026-03-03 01:20:26 | INFO  | Flavor SCS-16V-32 created 2026-03-03 01:20:27.046975 | orchestrator | 2026-03-03 01:20:26 | INFO  | Flavor SCS-16V-32-100 created 2026-03-03 01:20:27.046982 | orchestrator | 2026-03-03 01:20:26 | INFO  | Flavor SCS-2V-4-20s created 2026-03-03 01:20:27.046989 | orchestrator | 2026-03-03 01:20:26 | INFO  | Flavor SCS-4V-8-50s created 2026-03-03 01:20:27.046997 | orchestrator | 2026-03-03 01:20:26 | INFO  | Flavor SCS-4V-16-100s created 2026-03-03 01:20:27.047004 | orchestrator | 2026-03-03 01:20:26 | INFO  | Flavor SCS-8V-32-100s created 2026-03-03 01:20:29.480251 | orchestrator | 2026-03-03 01:20:29 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-03 
01:20:39.501669 | orchestrator | 2026-03-03 01:20:39 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-03 01:20:39.583128 | orchestrator | 2026-03-03 01:20:39 | INFO  | Task b05ceb99-f84d-4d92-9e8e-834e2453162a (bootstrap-basic) was prepared for execution. 2026-03-03 01:20:39.583218 | orchestrator | 2026-03-03 01:20:39 | INFO  | It takes a moment until task b05ceb99-f84d-4d92-9e8e-834e2453162a (bootstrap-basic) has been started and output is visible here. 2026-03-03 01:21:25.994937 | orchestrator | 2026-03-03 01:21:25.995032 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-03 01:21:25.995044 | orchestrator | 2026-03-03 01:21:25.995053 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-03 01:21:25.995062 | orchestrator | Tuesday 03 March 2026 01:20:43 +0000 (0:00:00.067) 0:00:00.067 ********* 2026-03-03 01:21:25.995070 | orchestrator | ok: [localhost] 2026-03-03 01:21:25.995080 | orchestrator | 2026-03-03 01:21:25.995088 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-03 01:21:25.995096 | orchestrator | Tuesday 03 March 2026 01:20:45 +0000 (0:00:02.002) 0:00:02.070 ********* 2026-03-03 01:21:25.995106 | orchestrator | ok: [localhost] 2026-03-03 01:21:25.995114 | orchestrator | 2026-03-03 01:21:25.995122 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-03 01:21:25.995130 | orchestrator | Tuesday 03 March 2026 01:20:54 +0000 (0:00:08.211) 0:00:10.281 ********* 2026-03-03 01:21:25.995139 | orchestrator | changed: [localhost] 2026-03-03 01:21:25.995147 | orchestrator | 2026-03-03 01:21:25.995156 | orchestrator | TASK [Create public network] *************************************************** 2026-03-03 01:21:25.995164 | orchestrator | Tuesday 03 March 2026 01:21:02 +0000 (0:00:08.128) 0:00:18.410 ********* 2026-03-03 01:21:25.995172 
| orchestrator | changed: [localhost] 2026-03-03 01:21:25.995180 | orchestrator | 2026-03-03 01:21:25.995191 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-03 01:21:25.995200 | orchestrator | Tuesday 03 March 2026 01:21:07 +0000 (0:00:05.264) 0:00:23.675 ********* 2026-03-03 01:21:25.995208 | orchestrator | changed: [localhost] 2026-03-03 01:21:25.995216 | orchestrator | 2026-03-03 01:21:25.995224 | orchestrator | TASK [Create public subnet] **************************************************** 2026-03-03 01:21:25.995232 | orchestrator | Tuesday 03 March 2026 01:21:13 +0000 (0:00:06.354) 0:00:30.029 ********* 2026-03-03 01:21:25.995240 | orchestrator | changed: [localhost] 2026-03-03 01:21:25.995248 | orchestrator | 2026-03-03 01:21:25.995256 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-03 01:21:25.995264 | orchestrator | Tuesday 03 March 2026 01:21:18 +0000 (0:00:04.598) 0:00:34.628 ********* 2026-03-03 01:21:25.995272 | orchestrator | changed: [localhost] 2026-03-03 01:21:25.995280 | orchestrator | 2026-03-03 01:21:25.995288 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-03 01:21:25.995305 | orchestrator | Tuesday 03 March 2026 01:21:22 +0000 (0:00:03.846) 0:00:38.475 ********* 2026-03-03 01:21:25.995313 | orchestrator | ok: [localhost] 2026-03-03 01:21:25.995321 | orchestrator | 2026-03-03 01:21:25.995330 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-03 01:21:25.995338 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-03 01:21:25.995386 | orchestrator | 2026-03-03 01:21:25.995395 | orchestrator | 2026-03-03 01:21:25.995403 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-03 01:21:25.995411 | orchestrator | Tuesday 03 
March 2026 01:21:25 +0000 (0:00:03.495) 0:00:41.970 ********* 2026-03-03 01:21:25.995419 | orchestrator | =============================================================================== 2026-03-03 01:21:25.995427 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.21s 2026-03-03 01:21:25.995457 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.13s 2026-03-03 01:21:25.995468 | orchestrator | Set public network to default ------------------------------------------- 6.35s 2026-03-03 01:21:25.995477 | orchestrator | Create public network --------------------------------------------------- 5.26s 2026-03-03 01:21:25.995486 | orchestrator | Create public subnet ---------------------------------------------------- 4.60s 2026-03-03 01:21:25.995495 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.85s 2026-03-03 01:21:25.995504 | orchestrator | Create manager role ----------------------------------------------------- 3.50s 2026-03-03 01:21:25.995514 | orchestrator | Gathering Facts --------------------------------------------------------- 2.00s 2026-03-03 01:21:28.369575 | orchestrator | 2026-03-03 01:21:28 | INFO  | It takes a moment until task c73ba62c-66be-4682-8058-a349938cf1ae (image-manager) has been started and output is visible here. 
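The flavor-manager run earlier in this log created a series of `SCS-*` flavors (SCS-1L-1 through SCS-8V-32-100s). Per my reading of the SCS flavor-naming convention, those names encode `SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]`; the class labels below ("L" as low-performance, "V" as vCPU) and the SSD meaning of the trailing "s" are assumptions about that convention, not taken from this job's configuration:

```python
import re

# Sketch: decode an SCS flavor name such as "SCS-4V-16-50" (assumed pattern:
# SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]). Returns None on non-SCS names.
SCS_RE = re.compile(r"SCS-(\d+)([LV])-(\d+)(?:-(\d+)(s?))?")

def parse_scs_flavor(name):
    m = SCS_RE.fullmatch(name)
    if not m:
        return None
    cpus, cpu_class, ram, disk, ssd = m.groups()
    return {
        "vcpus": int(cpus),
        "cpu_class": "low-performance" if cpu_class == "L" else "vCPU",  # assumption
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else 0,   # 0: no root disk encoded in the name
        "ssd": ssd == "s",                     # assumption: "s" suffix marks SSD disk
    }

print(parse_scs_flavor("SCS-4V-16-50"))
```

All twenty-nine flavor names in the log above match this pattern, which is a quick sanity check on the deployed flavor set.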
2026-03-03 01:22:12.276126 | orchestrator | 2026-03-03 01:21:31 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-03 01:22:12.276233 | orchestrator | 2026-03-03 01:21:31 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-03 01:22:12.276248 | orchestrator | 2026-03-03 01:21:31 | INFO  | Importing image Cirros 0.6.2 2026-03-03 01:22:12.276259 | orchestrator | 2026-03-03 01:21:31 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-03 01:22:12.276295 | orchestrator | 2026-03-03 01:21:33 | INFO  | Waiting for image to leave queued state... 2026-03-03 01:22:12.276307 | orchestrator | 2026-03-03 01:21:35 | INFO  | Waiting for import to complete... 2026-03-03 01:22:12.276334 | orchestrator | 2026-03-03 01:21:46 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-03 01:22:12.276347 | orchestrator | 2026-03-03 01:21:46 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-03 01:22:12.276357 | orchestrator | 2026-03-03 01:21:46 | INFO  | Setting internal_version = 0.6.2 2026-03-03 01:22:12.276368 | orchestrator | 2026-03-03 01:21:46 | INFO  | Setting image_original_user = cirros 2026-03-03 01:22:12.276378 | orchestrator | 2026-03-03 01:21:46 | INFO  | Adding tag os:cirros 2026-03-03 01:22:12.276388 | orchestrator | 2026-03-03 01:21:46 | INFO  | Setting property architecture: x86_64 2026-03-03 01:22:12.276399 | orchestrator | 2026-03-03 01:21:47 | INFO  | Setting property hw_disk_bus: scsi 2026-03-03 01:22:12.276409 | orchestrator | 2026-03-03 01:21:47 | INFO  | Setting property hw_rng_model: virtio 2026-03-03 01:22:12.276420 | orchestrator | 2026-03-03 01:21:47 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-03 01:22:12.276430 | orchestrator | 2026-03-03 01:21:47 | INFO  | Setting property hw_watchdog_action: reset 2026-03-03 01:22:12.276440 | orchestrator | 2026-03-03 01:21:48 | 
INFO  | Setting property hypervisor_type: qemu
2026-03-03 01:22:12.276459 | orchestrator | 2026-03-03 01:21:48 | INFO  | Setting property os_distro: cirros
2026-03-03 01:22:12.276469 | orchestrator | 2026-03-03 01:21:48 | INFO  | Setting property os_purpose: minimal
2026-03-03 01:22:12.276479 | orchestrator | 2026-03-03 01:21:48 | INFO  | Setting property replace_frequency: never
2026-03-03 01:22:12.276490 | orchestrator | 2026-03-03 01:21:49 | INFO  | Setting property uuid_validity: none
2026-03-03 01:22:12.276500 | orchestrator | 2026-03-03 01:21:49 | INFO  | Setting property provided_until: none
2026-03-03 01:22:12.276510 | orchestrator | 2026-03-03 01:21:49 | INFO  | Setting property image_description: Cirros
2026-03-03 01:22:12.276521 | orchestrator | 2026-03-03 01:21:49 | INFO  | Setting property image_name: Cirros
2026-03-03 01:22:12.276555 | orchestrator | 2026-03-03 01:21:50 | INFO  | Setting property internal_version: 0.6.2
2026-03-03 01:22:12.276567 | orchestrator | 2026-03-03 01:21:50 | INFO  | Setting property image_original_user: cirros
2026-03-03 01:22:12.276578 | orchestrator | 2026-03-03 01:21:50 | INFO  | Setting property os_version: 0.6.2
2026-03-03 01:22:12.276591 | orchestrator | 2026-03-03 01:21:50 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-03 01:22:12.276604 | orchestrator | 2026-03-03 01:21:51 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-03 01:22:12.276616 | orchestrator | 2026-03-03 01:21:51 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-03 01:22:12.276627 | orchestrator | 2026-03-03 01:21:51 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-03 01:22:12.276642 | orchestrator | 2026-03-03 01:21:51 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-03 01:22:12.276653 | orchestrator | 2026-03-03 01:21:51 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-03 01:22:12.276663 | orchestrator | 2026-03-03 01:21:51 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-03 01:22:12.276673 | orchestrator | 2026-03-03 01:21:51 | INFO  | Importing image Cirros 0.6.3
2026-03-03 01:22:12.276683 | orchestrator | 2026-03-03 01:21:51 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-03 01:22:12.276692 | orchestrator | 2026-03-03 01:21:53 | INFO  | Waiting for image to leave queued state...
2026-03-03 01:22:12.276702 | orchestrator | 2026-03-03 01:21:55 | INFO  | Waiting for import to complete...
2026-03-03 01:22:12.276733 | orchestrator | 2026-03-03 01:22:05 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-03 01:22:12.276744 | orchestrator | 2026-03-03 01:22:06 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-03 01:22:12.276754 | orchestrator | 2026-03-03 01:22:06 | INFO  | Setting internal_version = 0.6.3
2026-03-03 01:22:12.276763 | orchestrator | 2026-03-03 01:22:06 | INFO  | Setting image_original_user = cirros
2026-03-03 01:22:12.276773 | orchestrator | 2026-03-03 01:22:06 | INFO  | Adding tag os:cirros
2026-03-03 01:22:12.276783 | orchestrator | 2026-03-03 01:22:06 | INFO  | Setting property architecture: x86_64
2026-03-03 01:22:12.276792 | orchestrator | 2026-03-03 01:22:06 | INFO  | Setting property hw_disk_bus: scsi
2026-03-03 01:22:12.276802 | orchestrator | 2026-03-03 01:22:06 | INFO  | Setting property hw_rng_model: virtio
2026-03-03 01:22:12.276811 | orchestrator | 2026-03-03 01:22:07 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-03 01:22:12.276820 | orchestrator | 2026-03-03 01:22:07 | INFO  | Setting property hw_watchdog_action: reset
2026-03-03 01:22:12.276830 | orchestrator | 2026-03-03 01:22:07 | INFO  | Setting property hypervisor_type: qemu
2026-03-03 01:22:12.276839 | orchestrator | 2026-03-03 01:22:07 | INFO  | Setting property os_distro: cirros
2026-03-03 01:22:12.276848 | orchestrator | 2026-03-03 01:22:08 | INFO  | Setting property os_purpose: minimal
2026-03-03 01:22:12.276857 | orchestrator | 2026-03-03 01:22:08 | INFO  | Setting property replace_frequency: never
2026-03-03 01:22:12.276867 | orchestrator | 2026-03-03 01:22:08 | INFO  | Setting property uuid_validity: none
2026-03-03 01:22:12.276876 | orchestrator | 2026-03-03 01:22:08 | INFO  | Setting property provided_until: none
2026-03-03 01:22:12.276885 | orchestrator | 2026-03-03 01:22:09 | INFO  | Setting property image_description: Cirros
2026-03-03 01:22:12.276905 | orchestrator | 2026-03-03 01:22:09 | INFO  | Setting property image_name: Cirros
2026-03-03 01:22:12.276914 | orchestrator | 2026-03-03 01:22:09 | INFO  | Setting property internal_version: 0.6.3
2026-03-03 01:22:12.276923 | orchestrator | 2026-03-03 01:22:10 | INFO  | Setting property image_original_user: cirros
2026-03-03 01:22:12.276933 | orchestrator | 2026-03-03 01:22:10 | INFO  | Setting property os_version: 0.6.3
2026-03-03 01:22:12.276942 | orchestrator | 2026-03-03 01:22:10 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-03 01:22:12.276951 | orchestrator | 2026-03-03 01:22:10 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-03 01:22:12.276960 | orchestrator | 2026-03-03 01:22:11 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-03 01:22:12.276969 | orchestrator | 2026-03-03 01:22:11 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-03 01:22:12.276979 | orchestrator | 2026-03-03 01:22:11 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-03 01:22:12.558425 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-03 01:22:14.829238 | orchestrator | 2026-03-03 01:22:14 | INFO  | date: list: │
2026-03-03 01:22:16.925207 | orchestrator | │ 134 │ │ """Read all YAML files in self.CONF.images""" │
2026-03-03 01:22:16.925212 | orchestrator | │ │
2026-03-03 01:22:16.925216 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:224 │
2026-03-03 01:22:16.925222 | orchestrator | │ in main │
2026-03-03 01:22:16.925227 | orchestrator | │ │
2026-03-03 01:22:16.925248 | orchestrator | │ 221 │ │ │
2026-03-03 01:22:16.925253 | orchestrator | │ 222 │ │ # check local image definitions with yamale │
2026-03-03 01:22:16.925257 | orchestrator | │ 223 │ │ if self.CONF.check or self.CONF.check_only: │
2026-03-03 01:22:16.925261 | orchestrator | │ ❱ 224 │ │ │ self.validate_yaml_schema() │
2026-03-03 01:22:16.925266 | orchestrator | │ 225 │ │ │
2026-03-03 01:22:16.925270 | orchestrator | │ 226 │ │ if self.CONF.check_only: │
2026-03-03 01:22:16.925274 | orchestrator | │ 227 │ │ │ return │
2026-03-03 01:22:16.925279 | orchestrator | │ │
2026-03-03 01:22:16.925283 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:1168 │
2026-03-03 01:22:16.925287 | orchestrator | │ in validate_yaml_schema │
2026-03-03 01:22:16.925291 | orchestrator | │ │
2026-03-03 01:22:16.925296 | orchestrator | │ 1165 │ │ │ │
2026-03-03 01:22:16.925300 | orchestrator | │ 1166 │ │ │ for file in files: │
2026-03-03 01:22:16.925304 | orchestrator | │ 1167 │ │ │ │ try: │
2026-03-03 01:22:16.925309 | orchestrator | │ ❱ 1168 │ │ │ │ │ data = yamale.make_data(file) │
2026-03-03 01:22:16.925314 | orchestrator | │ 1169 │ │ │ │ │ yamale.validate(schema, data) │
2026-03-03 01:22:16.925331 | orchestrator | │ 1170 │ │ │ │ except YamaleError as e: │
2026-03-03 01:22:16.925335 | orchestrator | │ 1171 │ │ │ │ │ for result in e.results: │
2026-03-03 01:22:16.925340 | orchestrator | │ │
2026-03-03 01:22:16.925344 | orchestrator | │ /usr/local/lib/python3.13/site-packages/yamale/yamale.py:31 in make_data │
2026-03-03 01:22:16.925353 | orchestrator | │ │
2026-03-03 01:22:16.925358 | orchestrator | │ 28 def make_data(path=None, parser="PyYAML", content=None): │
2026-03-03 01:22:16.925363 | orchestrator | │ 29 │ from . import readers │
2026-03-03 01:22:16.925367 | orchestrator | │ 30 │ │
2026-03-03 01:22:16.925371 | orchestrator | │ ❱ 31 │ raw_data = readers.parse_yaml(path, parser, content=content) │
2026-03-03 01:22:16.925376 | orchestrator | │ 32 │ if len(raw_data) == 0: │
2026-03-03 01:22:16.925380 | orchestrator | │ 33 │ │ return [({}, path)] │
2026-03-03 01:22:16.925384 | orchestrator | │ 34 │ return [(d, path) for d in raw_data] │
2026-03-03 01:22:16.925389 | orchestrator | │ │
2026-03-03 01:22:16.925393 | orchestrator | │ /usr/local/lib/python3.13/site-packages/yamale/readers/yaml_reader.py:34 in │
2026-03-03 01:22:16.925407 | orchestrator | │ parse_yaml │
2026-03-03 01:22:16.925411 | orchestrator | │ │
2026-03-03 01:22:16.925416 | orchestrator | │ 31 │ │ raise TypeError("Pass either path= or content=, not both") │
2026-03-03 01:22:16.925420 | orchestrator | │ 32 │ if path is not None: │
2026-03-03 01:22:16.925424 | orchestrator | │ 33 │ │ with open(path) as f: │
2026-03-03 01:22:16.925433 | orchestrator | │ ❱ 34 │ │ │ return parse(f) │
2026-03-03 01:22:16.925437 | orchestrator | │ 35 │ else: │
2026-03-03 01:22:16.925441 | orchestrator | │ 36 │ │ return parse(StringIO(content)) │
2026-03-03 01:22:16.925446 | orchestrator | │ 37 │
2026-03-03 01:22:16.925450 | orchestrator | │ │
2026-03-03 01:22:16.925454 | orchestrator | │ /usr/local/lib/python3.13/site-packages/yamale/readers/yaml_reader.py:12 in │
2026-03-03 01:22:16.925458 | orchestrator | │ _pyyaml │
2026-03-03 01:22:16.925463 | orchestrator | │ │
2026-03-03 01:22:16.925467 | orchestrator | │ 9 │ │ Loader = yaml.CSafeLoader │
2026-03-03 01:22:16.925480 | orchestrator | │ 10 │ except AttributeError: # System does not have libyaml │
2026-03-03 01:22:16.925488 | orchestrator | │ 11 │ │ Loader = yaml.SafeLoader │
2026-03-03 01:22:16.925492 | orchestrator | │ ❱ 12 │ return list(yaml.load_all(f, Loader=Loader)) │
2026-03-03 01:22:16.925502 | orchestrator | │ 13 │
2026-03-03 01:22:16.925507 | orchestrator | │ 14 │
2026-03-03 01:22:16.925511 | orchestrator | │ 15 def _ruamel(f): │
2026-03-03 01:22:16.925515 | orchestrator | │ │
2026-03-03 01:22:16.925519 | orchestrator | │ /usr/local/lib/python3.13/site-packages/yaml/__init__.py:93 in load_all │
2026-03-03 01:22:16.925524 | orchestrator | │ │
2026-03-03 01:22:16.925528 | orchestrator | │ 90 │ loader = Loader(stream) │
2026-03-03 01:22:16.925533 | orchestrator | │ 91 │ try: │
2026-03-03 01:22:16.925537 | orchestrator | │ 92 │ │ while loader.check_data(): │
2026-03-03 01:22:16.925541 | orchestrator | │ ❱ 93 │ │ │ yield loader.get_data() │
2026-03-03 01:22:16.925545 | orchestrator | │ 94 │ finally: │
2026-03-03 01:22:16.925550 | orchestrator | │ 95 │ │ loader.dispose() │
2026-03-03 01:22:16.925554 | orchestrator | │ 96 │
2026-03-03 01:22:16.925558 | orchestrator | │ │
2026-03-03 01:22:16.925562 | orchestrator | │ /usr/local/lib/python3.13/site-packages/yaml/constructor.py:45 in get_data │
2026-03-03 01:22:16.925567 | orchestrator | │ │
2026-03-03 01:22:16.925571 | orchestrator | │ 42 │ def get_data(self): │
2026-03-03 01:22:16.925575 | orchestrator | │ 43 │ │ # Construct and return the next document. │
2026-03-03 01:22:16.925579 | orchestrator | │ 44 │ │ if self.check_node(): │
2026-03-03 01:22:16.925584 | orchestrator | │ ❱ 45 │ │ │ return self.construct_document(self.get_node()) │
2026-03-03 01:22:16.925588 | orchestrator | │ 46 │ │
2026-03-03 01:22:16.925592 | orchestrator | │ 47 │ def get_single_data(self): │
2026-03-03 01:22:16.925596 | orchestrator | │ 48 │ │ # Ensure that the stream contains a single document and constr │
2026-03-03 01:22:16.925604 | orchestrator | │ │
2026-03-03 01:22:16.925608 | orchestrator | │ in yaml._yaml.CParser.get_node:666 │
2026-03-03 01:22:16.925615 | orchestrator | │ │
2026-03-03 01:22:16.925620 | orchestrator | │ in yaml._yaml.CParser._compose_document:688 │
2026-03-03 01:22:16.925624 | orchestrator | │ │
2026-03-03 01:22:16.925628 | orchestrator | │ in yaml._yaml.CParser._compose_node:732 │
2026-03-03 01:22:16.925632 | orchestrator | │ │
2026-03-03 01:22:16.925640 | orchestrator | │ in yaml._yaml.CParser._compose_mapping_node:846 │
2026-03-03 01:22:17.029036 | orchestrator | │ │
2026-03-03 01:22:17.029111 | orchestrator | │ in yaml._yaml.CParser._compose_node:730 │
2026-03-03 01:22:17.029120 | orchestrator | │ │
2026-03-03 01:22:17.029127 | orchestrator | │ in yaml._yaml.CParser._compose_sequence_node:807 │
2026-03-03 01:22:17.029134 | orchestrator | │ │
2026-03-03 01:22:17.029140 | orchestrator | │ in yaml._yaml.CParser._compose_node:732 │
2026-03-03 01:22:17.029147 | orchestrator | │ │
2026-03-03 01:22:17.029153 | orchestrator | │ in yaml._yaml.CParser._compose_mapping_node:846 │
2026-03-03 01:22:17.029159 | orchestrator | │ │
2026-03-03 01:22:17.029165 | orchestrator | │ in yaml._yaml.CParser._compose_node:730 │
2026-03-03 01:22:17.029172 | orchestrator | │ │
2026-03-03 01:22:17.029198 | orchestrator | │ in yaml._yaml.CParser._compose_sequence_node:807 │
2026-03-03 01:22:17.029205 | orchestrator | │ │
2026-03-03 01:22:17.029211 | orchestrator | │ in yaml._yaml.CParser._compose_node:732 │
2026-03-03 01:22:17.029217 | orchestrator | │ │
2026-03-03 01:22:17.029223 | orchestrator | │ in yaml._yaml.CParser._compose_mapping_node:848 │
2026-03-03 01:22:17.029230 | orchestrator | │ │
2026-03-03 01:22:17.029236 | orchestrator | │ in yaml._yaml.CParser._parse_next_event:861 │
2026-03-03 01:22:17.029258 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯
2026-03-03 01:22:17.029267 | orchestrator | ParserError: while parsing a block mapping
2026-03-03 01:22:17.029275 | orchestrator | in "/tmp/tmpeorll5ny/tmp1prgjvt1.yml", line 28, column 9
2026-03-03 01:22:17.029282 | orchestrator | did not find expected key
2026-03-03 01:22:17.029289 | orchestrator | in "/tmp/tmpeorll5ny/tmp1prgjvt1.yml", line 29, column 98
2026-03-03 01:22:17.717511 | orchestrator | ERROR
2026-03-03 01:22:17.717925 | orchestrator | {
2026-03-03 01:22:17.718030 | orchestrator | "delta": "0:02:04.330119",
2026-03-03 01:22:17.718100 | orchestrator | "end": "2026-03-03 01:22:17.326376",
2026-03-03 01:22:17.718158 | orchestrator | "msg": "non-zero return code",
2026-03-03 01:22:17.718211 | orchestrator | "rc": 1,
2026-03-03 01:22:17.718264 | orchestrator | "start": "2026-03-03 01:20:12.996257"
2026-03-03 01:22:17.718315 | orchestrator | } failure
2026-03-03 01:22:17.734038 |
2026-03-03 01:22:17.734174 | PLAY RECAP
2026-03-03 01:22:17.734251 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-03-03 01:22:17.734291 |
2026-03-03 01:22:17.936198 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-03 01:22:17.938659 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-03 01:22:18.704230 |
2026-03-03 01:22:18.704397 | PLAY [Post output play]
2026-03-03 01:22:18.722510 |
2026-03-03 01:22:18.722662 | LOOP [stage-output : Register sources]
2026-03-03 01:22:18.792650 |
2026-03-03 01:22:18.792991 | TASK [stage-output : Check sudo]
2026-03-03 01:22:19.657145 | orchestrator | sudo: a password is required
2026-03-03 01:22:19.832735 | orchestrator | ok: Runtime: 0:00:00.014601
2026-03-03 01:22:19.847842 |
2026-03-03 01:22:19.848001 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-03 01:22:19.886219 |
2026-03-03 01:22:19.886493 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-03 01:22:19.948154 | orchestrator | ok
2026-03-03 01:22:19.955449 |
2026-03-03 01:22:19.955585 | LOOP [stage-output : Ensure target folders exist]
2026-03-03 01:22:20.491112 | orchestrator | ok: "docs"
2026-03-03 01:22:20.491521 |
2026-03-03 01:22:20.808846 | orchestrator | ok: "artifacts"
2026-03-03 01:22:21.152047 | orchestrator | ok: "logs"
2026-03-03 01:22:21.169796 |
2026-03-03 01:22:21.169979 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-03 01:22:21.205822 |
2026-03-03 01:22:21.206051 | TASK [stage-output : Make all log files readable]
2026-03-03 01:22:21.529991 | orchestrator | ok
2026-03-03 01:22:21.536741 |
2026-03-03 01:22:21.536862 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-03 01:22:21.571168 | orchestrator | skipping: Conditional result was False
2026-03-03 01:22:21.582612 |
2026-03-03 01:22:21.583006 | TASK [stage-output : Discover log files for compression]
2026-03-03 01:22:21.610695 | orchestrator | skipping: Conditional result was False
2026-03-03 01:22:21.625622 |
2026-03-03 01:22:21.625793 | LOOP [stage-output : Archive everything from logs]
2026-03-03 01:22:21.668376 |
2026-03-03 01:22:21.668568 | PLAY [Post cleanup play]
2026-03-03 01:22:21.677170 |
2026-03-03 01:22:21.677282 | TASK [Set cloud fact (Zuul deployment)]
2026-03-03 01:22:21.744062 | orchestrator | ok
2026-03-03 01:22:21.755267 |
2026-03-03 01:22:21.755424 | TASK [Set cloud fact (local deployment)]
2026-03-03 01:22:21.780147 | orchestrator | skipping: Conditional result was False
2026-03-03 01:22:21.793872 |
2026-03-03 01:22:21.794025 | TASK [Clean the cloud environment]
2026-03-03 01:22:22.497700 | orchestrator | 2026-03-03 01:22:22 - clean up servers
2026-03-03 01:22:23.244183 | orchestrator | 2026-03-03 01:22:23 - testbed-manager
2026-03-03 01:22:23.328807 | orchestrator | 2026-03-03 01:22:23 - testbed-node-2
2026-03-03 01:22:23.410850 | orchestrator | 2026-03-03 01:22:23 - testbed-node-4
2026-03-03 01:22:23.505651 | orchestrator | 2026-03-03 01:22:23 - testbed-node-0
2026-03-03 01:22:23.595943 | orchestrator | 2026-03-03 01:22:23 - testbed-node-1
2026-03-03 01:22:23.688223 | orchestrator | 2026-03-03 01:22:23 - testbed-node-3
2026-03-03 01:22:23.777661 | orchestrator | 2026-03-03 01:22:23 - testbed-node-5
2026-03-03 01:22:23.869470 | orchestrator | 2026-03-03 01:22:23 - clean up keypairs
2026-03-03 01:22:23.885891 | orchestrator | 2026-03-03 01:22:23 - testbed
2026-03-03 01:22:23.907887 | orchestrator | 2026-03-03 01:22:23 - wait for servers to be gone
2026-03-03 01:22:34.797488 | orchestrator | 2026-03-03 01:22:34 - clean up ports
2026-03-03 01:22:35.418762 | orchestrator | 2026-03-03 01:22:35 - 1b88a272-8108-4864-b286-555aac3ac743
2026-03-03 01:22:35.859202 | orchestrator | 2026-03-03 01:22:35 - 1e04a5f6-b3e2-4dcd-bc7a-a4a42e7235a7
2026-03-03 01:22:36.056204 | orchestrator | 2026-03-03 01:22:36 - 3674f32f-312b-4f0f-a558-1deb73f263e3
2026-03-03 01:22:36.321280 | orchestrator | 2026-03-03 01:22:36 - 47cbe04b-de01-49cd-9786-4ba7840458a1
2026-03-03 01:22:36.533431 | orchestrator | 2026-03-03 01:22:36 - 7a7b744d-6042-427b-8d29-017ee1e3dc66
2026-03-03 01:22:36.734551 | orchestrator | 2026-03-03 01:22:36 - 9737e3b5-c03d-48c3-93af-d3d6fc78248c
2026-03-03 01:22:36.943010 | orchestrator | 2026-03-03 01:22:36 - fd8b4bf8-6e8d-43db-8896-57f752ec67aa
2026-03-03 01:22:37.139088 | orchestrator | 2026-03-03 01:22:37 - clean up volumes
2026-03-03 01:22:37.243541 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-3-node-base
2026-03-03 01:22:37.279217 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-manager-base
2026-03-03 01:22:37.322618 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-5-node-base
2026-03-03 01:22:37.364205 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-2-node-base
2026-03-03 01:22:37.404014 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-4-node-base
2026-03-03 01:22:37.443809 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-1-node-base
2026-03-03 01:22:37.482107 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-0-node-base
2026-03-03 01:22:37.521672 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-0-node-3
2026-03-03 01:22:37.561220 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-1-node-4
2026-03-03 01:22:37.609378 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-2-node-5
2026-03-03 01:22:37.650195 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-8-node-5
2026-03-03 01:22:37.691399 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-7-node-4
2026-03-03 01:22:37.734084 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-3-node-3
2026-03-03 01:22:37.779370 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-5-node-5
2026-03-03 01:22:37.818204 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-4-node-4
2026-03-03 01:22:37.860724 | orchestrator | 2026-03-03 01:22:37 - testbed-volume-6-node-3
2026-03-03 01:22:37.906716 | orchestrator | 2026-03-03 01:22:37 - disconnect routers
2026-03-03 01:22:38.028296 | orchestrator | 2026-03-03 01:22:38 - testbed
2026-03-03 01:22:39.100687 | orchestrator | 2026-03-03 01:22:39 - clean up subnets
2026-03-03 01:22:39.158383 | orchestrator | 2026-03-03 01:22:39 - subnet-testbed-management
2026-03-03 01:22:39.348195 | orchestrator | 2026-03-03 01:22:39 - clean up networks
2026-03-03 01:22:39.523412 | orchestrator | 2026-03-03 01:22:39 - net-testbed-management
2026-03-03 01:22:39.837215 | orchestrator | 2026-03-03 01:22:39 - clean up security groups
2026-03-03 01:22:39.885225 | orchestrator | 2026-03-03 01:22:39 - testbed-node
2026-03-03 01:22:39.992198 | orchestrator | 2026-03-03 01:22:39 - testbed-management
2026-03-03 01:22:40.106460 | orchestrator | 2026-03-03 01:22:40 - clean up floating ips
2026-03-03 01:22:40.141502 | orchestrator | 2026-03-03 01:22:40 - 81.163.193.90
2026-03-03 01:22:40.508670 | orchestrator | 2026-03-03 01:22:40 - clean up routers
2026-03-03 01:22:40.602394 | orchestrator | 2026-03-03 01:22:40 - testbed
2026-03-03 01:22:41.852550 | orchestrator | ok: Runtime: 0:00:19.268870
2026-03-03 01:22:41.855247 |
2026-03-03 01:22:41.855365 | PLAY RECAP
2026-03-03 01:22:41.855492 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-03 01:22:41.855552 |
2026-03-03 01:22:41.993675 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-03 01:22:41.996326 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-03 01:22:42.777083 |
2026-03-03 01:22:42.777247 | PLAY [Cleanup play]
2026-03-03 01:22:42.794021 |
2026-03-03 01:22:42.794166 | TASK [Set cloud fact (Zuul deployment)]
2026-03-03 01:22:42.849329 | orchestrator | ok
2026-03-03 01:22:42.857757 |
2026-03-03 01:22:42.857903 | TASK [Set cloud fact (local deployment)]
2026-03-03 01:22:42.883285 | orchestrator | skipping: Conditional result was False
2026-03-03 01:22:42.893603 |
2026-03-03 01:22:42.893725 | TASK [Clean the cloud environment]
2026-03-03 01:22:44.005029 | orchestrator | 2026-03-03 01:22:44 - clean up servers
2026-03-03 01:22:44.477333 | orchestrator | 2026-03-03 01:22:44 - clean up keypairs
2026-03-03 01:22:44.493190 | orchestrator | 2026-03-03 01:22:44 - wait for servers to be gone
2026-03-03 01:22:44.532952 | orchestrator | 2026-03-03 01:22:44 - clean up ports
2026-03-03 01:22:44.626413 | orchestrator | 2026-03-03 01:22:44 - clean up volumes
2026-03-03 01:22:44.704763 | orchestrator | 2026-03-03 01:22:44 - disconnect routers
2026-03-03 01:22:44.732741 | orchestrator | 2026-03-03 01:22:44 - clean up subnets
2026-03-03 01:22:44.761936 | orchestrator | 2026-03-03 01:22:44 - clean up networks
2026-03-03 01:22:44.887498 | orchestrator | 2026-03-03 01:22:44 - clean up security groups
2026-03-03 01:22:44.924754 | orchestrator | 2026-03-03 01:22:44 - clean up floating ips
2026-03-03 01:22:44.949062 | orchestrator | 2026-03-03 01:22:44 - clean up routers
2026-03-03 01:22:45.429706 | orchestrator | ok: Runtime: 0:00:01.341891
2026-03-03 01:22:45.433533 |
2026-03-03 01:22:45.433714 | PLAY RECAP
2026-03-03 01:22:45.433845 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-03 01:22:45.433917 |
2026-03-03 01:22:45.569529 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-03 01:22:45.570649 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-03 01:22:46.361007 |
2026-03-03 01:22:46.361173 | PLAY [Base post-fetch]
2026-03-03 01:22:46.377362 |
2026-03-03 01:22:46.377554 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-03 01:22:46.433010 | orchestrator | skipping: Conditional result was False
2026-03-03 01:22:46.448233 |
2026-03-03 01:22:46.448509 | TASK [fetch-output : Set log path for single node]
2026-03-03 01:22:46.496604 | orchestrator | ok
2026-03-03 01:22:46.506238 |
2026-03-03 01:22:46.506397 | LOOP [fetch-output : Ensure local output dirs]
2026-03-03 01:22:46.987906 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/8999bda436aa4417a08a6d306d807d2f/work/logs"
2026-03-03 01:22:47.268977 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8999bda436aa4417a08a6d306d807d2f/work/artifacts"
2026-03-03 01:22:47.539590 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8999bda436aa4417a08a6d306d807d2f/work/docs"
2026-03-03 01:22:47.561455 |
2026-03-03 01:22:47.561623 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-03 01:22:48.525467 | orchestrator | changed: .d..t...... ./
2026-03-03 01:22:48.526000 | orchestrator | changed: All items complete
2026-03-03 01:22:48.526079 |
2026-03-03 01:22:49.254934 | orchestrator | changed: .d..t...... ./
2026-03-03 01:22:49.983913 | orchestrator | changed: .d..t...... ./
2026-03-03 01:22:50.005511 |
2026-03-03 01:22:50.005642 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-03 01:22:50.034013 | orchestrator | skipping: Conditional result was False
2026-03-03 01:22:50.037531 | orchestrator | skipping: Conditional result was False
2026-03-03 01:22:50.049444 |
2026-03-03 01:22:50.049568 | PLAY RECAP
2026-03-03 01:22:50.049654 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-03 01:22:50.049716 |
2026-03-03 01:22:50.180281 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-03 01:22:50.183873 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-03 01:22:50.945503 |
2026-03-03 01:22:50.945671 | PLAY [Base post]
2026-03-03 01:22:50.960690 |
2026-03-03 01:22:50.960823 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-03 01:22:51.928580 | orchestrator | changed
2026-03-03 01:22:51.938612 |
2026-03-03 01:22:51.938741 | PLAY RECAP
2026-03-03 01:22:51.938815 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-03 01:22:51.938910 |
2026-03-03 01:22:52.057123 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-03 01:22:52.059862 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-03 01:22:52.882590 |
2026-03-03 01:22:52.882773 | PLAY [Base post-logs]
2026-03-03 01:22:52.893587 |
2026-03-03 01:22:52.893716 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-03 01:22:53.345297 | localhost | changed
2026-03-03 01:22:53.355856 |
2026-03-03 01:22:53.356004 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-03 01:22:53.382177 | localhost | ok
2026-03-03 01:22:53.385545 |
2026-03-03 01:22:53.385650 | TASK [Set zuul-log-path fact]
2026-03-03 01:22:53.403566 | localhost | ok
2026-03-03 01:22:53.413861 |
2026-03-03 01:22:53.413987 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-03 01:22:53.440885 | localhost | ok
2026-03-03 01:22:53.444974 |
2026-03-03 01:22:53.445092 | TASK [upload-logs : Create log directories]
2026-03-03 01:22:53.966719 | localhost | changed
2026-03-03 01:22:53.972054 |
2026-03-03 01:22:53.972213 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-03 01:22:54.511702 | localhost -> localhost | ok: Runtime: 0:00:00.009491
2026-03-03 01:22:54.521285 |
2026-03-03 01:22:54.521536 | TASK [upload-logs : Upload logs to log server]
2026-03-03 01:22:55.103243 | localhost | Output suppressed because no_log was given
2026-03-03 01:22:55.107045 |
2026-03-03 01:22:55.107230 | LOOP [upload-logs : Compress console log and json output]
2026-03-03 01:22:55.161513 | localhost | skipping: Conditional result was False
2026-03-03 01:22:55.166217 | localhost | skipping: Conditional result was False
2026-03-03 01:22:55.179130 |
2026-03-03 01:22:55.179356 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-03 01:22:55.228176 | localhost | skipping: Conditional result was False
2026-03-03 01:22:55.228629 |
2026-03-03 01:22:55.232970 | localhost | skipping: Conditional result was False
2026-03-03 01:22:55.241241 |
2026-03-03 01:22:55.241452 | LOOP [upload-logs : Upload console log and json output]